00:00:00.001 Started by upstream project "autotest-nightly" build number 3889
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3269
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.157 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.157 The recommended git tool is: git
00:00:00.157 using credential 00000000-0000-0000-0000-000000000002
00:00:00.180 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.195 Fetching changes from the remote Git repository
00:00:00.197 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.217 Using shallow fetch with depth 1
00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.217 > git --version # timeout=10
00:00:00.235 > git --version # 'git version 2.39.2'
00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.250 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.250 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.295 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.308 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.320 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:07.320 > git config core.sparsecheckout # timeout=10
00:00:07.332 > git read-tree -mu HEAD # timeout=10
00:00:07.349 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:07.366 Commit message: "inventory: add WCP3 to free inventory"
00:00:07.366 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
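Note: the checkout above is a depth-1 fetch of a single branch tip followed by a detached checkout of the fetched commit, which is how the Jenkins git plugin pins the build-pool repo to one revision. A minimal sketch of reproducing it by hand, using the same commands the plugin logs (the local directory name and the `git init` step are assumptions; Jenkins reuses an existing workspace instead):

# Sketch: shallow, pinned checkout of the jbp build-pool repo.
git init jbp && cd jbp
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# --depth=1 downloads only the tip of refs/heads/master into FETCH_HEAD.
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
# Check out the fetched revision directly; HEAD ends up detached, as in the log.
git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d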
00:00:07.455 [Pipeline] Start of Pipeline
00:00:07.470 [Pipeline] library
00:00:07.472 Loading library shm_lib@master
00:00:07.472 Library shm_lib@master is cached. Copying from home.
00:00:07.484 [Pipeline] node
00:00:07.494 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.496 [Pipeline] {
00:00:07.507 [Pipeline] catchError
00:00:07.509 [Pipeline] {
00:00:07.522 [Pipeline] wrap
00:00:07.531 [Pipeline] {
00:00:07.540 [Pipeline] stage
00:00:07.542 [Pipeline] { (Prologue)
00:00:07.563 [Pipeline] echo
00:00:07.564 Node: VM-host-SM9
00:00:07.570 [Pipeline] cleanWs
00:00:07.579 [WS-CLEANUP] Deleting project workspace...
00:00:07.579 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.584 [WS-CLEANUP] done
00:00:07.776 [Pipeline] setCustomBuildProperty
00:00:07.870 [Pipeline] httpRequest
00:00:07.886 [Pipeline] echo
00:00:07.888 Sorcerer 10.211.164.101 is alive
00:00:07.895 [Pipeline] httpRequest
00:00:07.899 HttpMethod: GET
00:00:07.900 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:07.900 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:07.916 Response Code: HTTP/1.1 200 OK
00:00:07.917 Success: Status code 200 is in the accepted range: 200,404
00:00:07.917 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:14.633 [Pipeline] sh
00:00:14.912 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:14.928 [Pipeline] httpRequest
00:00:14.948 [Pipeline] echo
00:00:14.950 Sorcerer 10.211.164.101 is alive
00:00:14.959 [Pipeline] httpRequest
00:00:14.963 HttpMethod: GET
00:00:14.964 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:14.964 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:14.984 Response Code: HTTP/1.1 200 OK
00:00:14.984 Success: Status code 200 is in the accepted range: 200,404
00:00:14.985 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:27.104 [Pipeline] sh
00:01:27.384 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:29.958 [Pipeline] sh
00:01:30.349 + git -C spdk log --oneline -n5
00:01:30.349 719d03c6a sock/uring: only register net impl if supported
00:01:30.349 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:30.349 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:30.349 6c7c1f57e accel: add sequence outstanding stat
00:01:30.349 3bc8e6a26 accel: add utility to put task
00:01:30.371 [Pipeline] writeFile
00:01:30.389 [Pipeline] sh
00:01:30.670 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:30.682 [Pipeline] sh
00:01:30.961 + cat autorun-spdk.conf
00:01:30.961 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.961 SPDK_TEST_NVME=1
00:01:30.961 SPDK_TEST_FTL=1
00:01:30.961 SPDK_TEST_ISAL=1
00:01:30.961 SPDK_RUN_ASAN=1
00:01:30.961 SPDK_RUN_UBSAN=1
00:01:30.961 SPDK_TEST_XNVME=1
00:01:30.961 SPDK_TEST_NVME_FDP=1
00:01:30.961 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:30.968 RUN_NIGHTLY=1
00:01:30.970 [Pipeline] }
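Note: autorun-spdk.conf is plain KEY=VALUE shell syntax; the scripts further down in this log (prepare_nvme.sh, autorun.sh) simply `source` it and gate work with arithmetic tests. A minimal sketch of a consumer in that style (the script itself is hypothetical; the variable names are from the conf above):

#!/usr/bin/env bash
set -euo pipefail
# The conf file is valid shell, so sourcing it exports nothing but defines the flags.
source ./autorun-spdk.conf
if (( ${SPDK_TEST_FTL:-0} == 1 )); then
    echo "FTL tests requested; an FTL-capable NVMe image will be needed"
fi
if (( ${SPDK_TEST_NVME_FDP:-0} == 1 )); then
    echo "FDP tests requested; an FDP-enabled controller will be needed"
fi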
00:01:30.986 [Pipeline] // stage
00:01:31.000 [Pipeline] stage
00:01:31.002 [Pipeline] { (Run VM)
00:01:31.014 [Pipeline] sh
00:01:31.290 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:31.290 + echo 'Start stage prepare_nvme.sh'
00:01:31.290 Start stage prepare_nvme.sh
00:01:31.290 + [[ -n 2 ]]
00:01:31.290 + disk_prefix=ex2
00:01:31.290 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:31.290 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:31.290 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:31.290 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.290 ++ SPDK_TEST_NVME=1
00:01:31.290 ++ SPDK_TEST_FTL=1
00:01:31.290 ++ SPDK_TEST_ISAL=1
00:01:31.290 ++ SPDK_RUN_ASAN=1
00:01:31.290 ++ SPDK_RUN_UBSAN=1
00:01:31.290 ++ SPDK_TEST_XNVME=1
00:01:31.290 ++ SPDK_TEST_NVME_FDP=1
00:01:31.290 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.290 ++ RUN_NIGHTLY=1
00:01:31.290 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:31.290 + nvme_files=()
00:01:31.290 + declare -A nvme_files
00:01:31.290 + backend_dir=/var/lib/libvirt/images/backends
00:01:31.290 + nvme_files['nvme.img']=5G
00:01:31.290 + nvme_files['nvme-cmb.img']=5G
00:01:31.290 + nvme_files['nvme-multi0.img']=4G
00:01:31.290 + nvme_files['nvme-multi1.img']=4G
00:01:31.290 + nvme_files['nvme-multi2.img']=4G
00:01:31.290 + nvme_files['nvme-openstack.img']=8G
00:01:31.290 + nvme_files['nvme-zns.img']=5G
00:01:31.290 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:31.290 + (( SPDK_TEST_FTL == 1 ))
00:01:31.290 + nvme_files["nvme-ftl.img"]=6G
00:01:31.290 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:31.290 + nvme_files["nvme-fdp.img"]=1G
00:01:31.290 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:31.290 + for nvme in "${!nvme_files[@]}"
00:01:31.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:31.290 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.290 + for nvme in "${!nvme_files[@]}"
00:01:31.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:01:31.549 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:31.549 + for nvme in "${!nvme_files[@]}"
00:01:31.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:31.549 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.549 + for nvme in "${!nvme_files[@]}"
00:01:31.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:31.549 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:31.549 + for nvme in "${!nvme_files[@]}"
00:01:31.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:31.549 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.549 + for nvme in "${!nvme_files[@]}"
00:01:31.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:31.809 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.809 + for nvme in "${!nvme_files[@]}"
00:01:31.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:32.067 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:32.067 + for nvme in "${!nvme_files[@]}"
00:01:32.067 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:01:32.067 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:32.067 + for nvme in "${!nvme_files[@]}"
00:01:32.067 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:32.067 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:32.067 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:32.067 + echo 'End stage prepare_nvme.sh'
00:01:32.067 End stage prepare_nvme.sh
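Note: the "Formatting ..., fmt=raw size=... preallocation=falloc" lines are qemu-img output, so the loop above can be approximated directly with qemu-img. create_nvme_img.sh's internals are not shown in this log, so treat the following as an assumption-laden sketch rather than the script itself:

#!/usr/bin/env bash
set -euo pipefail
backend_dir=/var/lib/libvirt/images/backends
# Image name -> size, mirroring the nvme_files associative array above.
declare -A nvme_files=(
    [nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G
    [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
)
mkdir -p "$backend_dir"
for name in "${!nvme_files[@]}"; do
    # raw format with falloc preallocation matches the log's output exactly.
    qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/ex2-$name" "${nvme_files[$name]}"
done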
00:01:32.078 [Pipeline] sh
00:01:32.359 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:32.359 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:01:32.359
00:01:32.359 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:32.359 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:32.359 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:32.359 HELP=0
00:01:32.359 DRY_RUN=0
00:01:32.359 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:01:32.359 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:32.359 NVME_AUTO_CREATE=0
00:01:32.359 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:01:32.359 NVME_CMB=,,,,
00:01:32.359 NVME_PMR=,,,,
00:01:32.359 NVME_ZNS=,,,,
00:01:32.359 NVME_MS=true,,,,
00:01:32.359 NVME_FDP=,,,on,
00:01:32.359 SPDK_VAGRANT_DISTRO=fedora38
00:01:32.359 SPDK_VAGRANT_VMCPU=10
00:01:32.359 SPDK_VAGRANT_VMRAM=12288
00:01:32.360 SPDK_VAGRANT_PROVIDER=libvirt
00:01:32.360 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:32.360 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:32.360 SPDK_OPENSTACK_NETWORK=0
00:01:32.360 VAGRANT_PACKAGE_BOX=0
00:01:32.360 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:32.360 FORCE_DISTRO=true
00:01:32.360 VAGRANT_BOX_VERSION=
00:01:32.360 EXTRA_VAGRANTFILES=
00:01:32.360 NIC_MODEL=e1000
00:01:32.360
00:01:32.360 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt'
00:01:32.360 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest
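Note: comparing the Setup line with the variable dump suggests each -b argument is positional, roughly "<image>[,<type>[,<namespace images>[,<cmb>[,<pmr>[,<zns>[,<ms>[,<fdp>]]]]]]]": the 7th field of the ftl disk ("true") lands in NVME_MS and the 8th field of the fdp disk ("on") lands in NVME_FDP. The decoder below is a sketch of that inference only, not code from the CI scripts:

# Hypothetical decoder for the inferred -b field layout.
decode_disk_spec() {
    local IFS=,
    local -a f=($1)   # split on commas; empty fields are preserved
    printf 'file=%s type=%s namespaces=%s cmb=%s pmr=%s zns=%s ms=%s fdp=%s\n' \
        "${f[0]-}" "${f[1]-}" "${f[2]-}" "${f[3]-}" "${f[4]-}" "${f[5]-}" "${f[6]-}" "${f[7]-}"
}
decode_disk_spec "/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true"
decode_disk_spec "/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on"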
00:01:35.648 Bringing machine 'default' up with 'libvirt' provider...
00:01:35.907 ==> default: Creating image (snapshot of base box volume).
00:01:35.907 ==> default: Creating domain with the following settings...
00:01:35.907 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720990847_505e636f43e5f2456a15
00:01:35.907 ==> default: -- Domain type: kvm
00:01:35.907 ==> default: -- Cpus: 10
00:01:35.907 ==> default: -- Feature: acpi
00:01:35.907 ==> default: -- Feature: apic
00:01:35.907 ==> default: -- Feature: pae
00:01:35.907 ==> default: -- Memory: 12288M
00:01:35.907 ==> default: -- Memory Backing: hugepages:
00:01:35.907 ==> default: -- Management MAC:
00:01:35.907 ==> default: -- Loader:
00:01:35.907 ==> default: -- Nvram:
00:01:35.907 ==> default: -- Base box: spdk/fedora38
00:01:35.907 ==> default: -- Storage pool: default
00:01:35.907 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720990847_505e636f43e5f2456a15.img (20G)
00:01:35.907 ==> default: -- Volume Cache: default
00:01:35.907 ==> default: -- Kernel:
00:01:35.907 ==> default: -- Initrd:
00:01:35.907 ==> default: -- Graphics Type: vnc
00:01:35.907 ==> default: -- Graphics Port: -1
00:01:35.907 ==> default: -- Graphics IP: 127.0.0.1
00:01:35.907 ==> default: -- Graphics Password: Not defined
00:01:35.907 ==> default: -- Video Type: cirrus
00:01:35.907 ==> default: -- Video VRAM: 9216
00:01:35.908 ==> default: -- Sound Type:
00:01:35.908 ==> default: -- Keymap: en-us
00:01:35.908 ==> default: -- TPM Path:
00:01:35.908 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:35.908 ==> default: -- Command line args:
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:35.908 ==> default: -> value=-drive,
00:01:35.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:35.908 ==> default: -> value=-drive,
00:01:35.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:35.908 ==> default: -> value=-drive,
00:01:35.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.908 ==> default: -> value=-drive,
00:01:35.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.908 ==> default: -> value=-drive,
00:01:35.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:35.908 ==> default: -> value=-drive,
00:01:35.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:35.908 ==> default: -> value=-device,
00:01:35.908 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
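Note: the value=... pairs above are libvirt's passthrough arguments; joined, they define four NVMe controllers (serials 12340-12343 at PCI addresses 0x10-0x13), the first with 64-byte metadata (ms=64), the third with three namespaces, and the fourth attached to an FDP-enabled subsystem (fdp.runs, fdp.nrg, fdp.nruh are QEMU's reclaim-unit nominal size, reclaim-group count, and reclaim-unit-handle count). As a sketch, the FDP controller alone corresponds to this invocation, with the machine setup and the other three controllers elided:

# FDP controller subset of the command line above (other options omitted).
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096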
00:01:36.169 ==> default: Creating shared folders metadata...
00:01:36.169 ==> default: Starting domain.
00:01:37.109 ==> default: Waiting for domain to get an IP address...
00:01:55.193 ==> default: Waiting for SSH to become available...
00:01:55.193 ==> default: Configuring and enabling network interfaces...
00:01:58.483 default: SSH address: 192.168.121.136:22
00:01:58.483 default: SSH username: vagrant
00:01:58.483 default: SSH auth method: private key
00:02:00.406 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:08.521 ==> default: Mounting SSHFS shared folder...
00:02:09.455 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:02:09.455 ==> default: Checking Mount..
00:02:10.391 ==> default: Folder Successfully Mounted!
00:02:10.391 ==> default: Running provisioner: file...
00:02:11.326 default: ~/.gitconfig => .gitconfig
00:02:11.584
00:02:11.585 SUCCESS!
00:02:11.585
00:02:11.585 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:02:11.585 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:11.585 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:02:11.585
00:02:11.593 [Pipeline] }
00:02:11.610 [Pipeline] // stage
00:02:11.620 [Pipeline] dir
00:02:11.620 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt
00:02:11.622 [Pipeline] {
00:02:11.636 [Pipeline] catchError
00:02:11.638 [Pipeline] {
00:02:11.656 [Pipeline] sh
00:02:11.990 + vagrant ssh-config --host vagrant
00:02:11.990 + sed -ne /^Host/,$p
00:02:11.990 + tee ssh_conf
00:02:15.275 Host vagrant
00:02:15.275 HostName 192.168.121.136
00:02:15.275 User vagrant
00:02:15.275 Port 22
00:02:15.275 UserKnownHostsFile /dev/null
00:02:15.275 StrictHostKeyChecking no
00:02:15.275 PasswordAuthentication no
00:02:15.275 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:02:15.275 IdentitiesOnly yes
00:02:15.275 LogLevel FATAL
00:02:15.275 ForwardAgent yes
00:02:15.275 ForwardX11 yes
00:02:15.275
00:02:15.289 [Pipeline] withEnv
00:02:15.292 [Pipeline] {
00:02:15.309 [Pipeline] sh
00:02:15.590 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:15.590 source /etc/os-release
00:02:15.590 [[ -e /image.version ]] && img=$(< /image.version)
00:02:15.590 # Minimal, systemd-like check.
00:02:15.590 if [[ -e /.dockerenv ]]; then
00:02:15.590 # Clear garbage from the node's name:
00:02:15.590 # agt-er_autotest_547-896 -> autotest_547-896
00:02:15.590 # $HOSTNAME is the actual container id
00:02:15.590 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:15.590 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:15.590 # We can assume this is a mount from a host where container is running,
00:02:15.590 # so fetch its hostname to easily identify the target swarm worker.
00:02:15.590 container="$(< /etc/hostname) ($agent)"
00:02:15.590 else
00:02:15.590 # Fallback
00:02:15.590 container=$agent
00:02:15.590 fi
00:02:15.590 fi
00:02:15.590 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:15.590
00:02:15.861 [Pipeline] }
00:02:15.882 [Pipeline] // withEnv
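Note: the `${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}` expansion in the script above strips the shortest prefix matching "*_", i.e. everything through the first underscore. A quick demo with a made-up value mirroring the script's own comment:

DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME="agt-er_autotest_547-896"
echo "${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}"   # prints: autotest_547-896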
00:02:15.892 [Pipeline] setCustomBuildProperty
00:02:15.908 [Pipeline] stage
00:02:15.910 [Pipeline] { (Tests)
00:02:15.930 [Pipeline] sh
00:02:16.211 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:16.485 [Pipeline] sh
00:02:16.765 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:17.040 [Pipeline] timeout
00:02:17.040 Timeout set to expire in 40 min
00:02:17.042 [Pipeline] {
00:02:17.059 [Pipeline] sh
00:02:17.339 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:17.907 HEAD is now at 719d03c6a sock/uring: only register net impl if supported
00:02:17.921 [Pipeline] sh
00:02:18.205 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:18.478 [Pipeline] sh
00:02:18.759 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:19.037 [Pipeline] sh
00:02:19.321 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:19.322 ++ readlink -f spdk_repo
00:02:19.322 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:19.322 + [[ -n /home/vagrant/spdk_repo ]]
00:02:19.322 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:19.322 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:19.322 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:19.322 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:19.322 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:19.322 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:19.322 + cd /home/vagrant/spdk_repo
00:02:19.322 + source /etc/os-release
00:02:19.322 ++ NAME='Fedora Linux'
00:02:19.322 ++ VERSION='38 (Cloud Edition)'
00:02:19.322 ++ ID=fedora
00:02:19.322 ++ VERSION_ID=38
00:02:19.322 ++ VERSION_CODENAME=
00:02:19.322 ++ PLATFORM_ID=platform:f38
00:02:19.322 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:19.322 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:19.322 ++ LOGO=fedora-logo-icon
00:02:19.322 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:19.322 ++ HOME_URL=https://fedoraproject.org/
00:02:19.322 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:19.322 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:19.322 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:19.322 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:19.322 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:19.322 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:19.322 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:19.322 ++ SUPPORT_END=2024-05-14
00:02:19.322 ++ VARIANT='Cloud Edition'
00:02:19.322 ++ VARIANT_ID=cloud
00:02:19.322 + uname -a
00:02:19.581 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:19.581 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:19.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:20.099 Hugepages
00:02:20.099 node hugesize free / total
00:02:20.099 node0 1048576kB 0 / 0
00:02:20.099 node0 2048kB 0 / 0
00:02:20.099
00:02:20.099 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:20.099 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:20.099 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:20.099 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:20.358 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:20.358 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
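Note: the device table confirms the guest sees the four QEMU controllers defined earlier (PCI 00:10.0-00:13.0) as nvme0-nvme3, with nvme2 carrying three namespaces. A small sketch for listing the same layout from inside the guest via standard Linux sysfs attributes:

# Controllers: /sys/class/nvme/nvmeX exposes address, serial, and model.
for c in /sys/class/nvme/nvme*; do
    echo "$(basename "$c"): pci=$(cat "$c/address") serial=$(cat "$c/serial") model=$(cat "$c/model")"
done
# Namespaces show up as block devices, e.g. nvme2n1..nvme2n3 here.
ls -d /sys/block/nvme*n*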
00:02:20.358 + rm -f /tmp/spdk-ld-path
00:02:20.358 + source autorun-spdk.conf
00:02:20.358 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.358 ++ SPDK_TEST_NVME=1
00:02:20.358 ++ SPDK_TEST_FTL=1
00:02:20.358 ++ SPDK_TEST_ISAL=1
00:02:20.358 ++ SPDK_RUN_ASAN=1
00:02:20.358 ++ SPDK_RUN_UBSAN=1
00:02:20.358 ++ SPDK_TEST_XNVME=1
00:02:20.358 ++ SPDK_TEST_NVME_FDP=1
00:02:20.358 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.358 ++ RUN_NIGHTLY=1
00:02:20.358 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:20.358 + [[ -n '' ]]
00:02:20.358 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:20.358 + for M in /var/spdk/build-*-manifest.txt
00:02:20.358 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:20.358 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.358 + for M in /var/spdk/build-*-manifest.txt
00:02:20.358 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:20.358 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.358 ++ uname
00:02:20.358 + [[ Linux == \L\i\n\u\x ]]
00:02:20.358 + sudo dmesg -T
00:02:20.358 + sudo dmesg --clear
00:02:20.358 + dmesg_pid=5191
00:02:20.358 + [[ Fedora Linux == FreeBSD ]]
00:02:20.358 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:20.358 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:20.358 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:20.358 + [[ -x /usr/src/fio-static/fio ]]
00:02:20.358 + sudo dmesg -Tw
00:02:20.358 + export FIO_BIN=/usr/src/fio-static/fio
00:02:20.358 + FIO_BIN=/usr/src/fio-static/fio
00:02:20.359 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:20.359 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:20.359 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:20.359 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:20.359 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:20.359 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:20.359 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:20.359 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:20.359 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:20.359 Test configuration:
00:02:20.359 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.359 SPDK_TEST_NVME=1
00:02:20.359 SPDK_TEST_FTL=1
00:02:20.359 SPDK_TEST_ISAL=1
00:02:20.359 SPDK_RUN_ASAN=1
00:02:20.359 SPDK_RUN_UBSAN=1
00:02:20.359 SPDK_TEST_XNVME=1
00:02:20.359 SPDK_TEST_NVME_FDP=1
00:02:20.359 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.617 RUN_NIGHTLY=1 21:01:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:20.617 21:01:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:20.617 21:01:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:20.617 21:01:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:20.618 21:01:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.618 21:01:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.618 21:01:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.618 21:01:31 -- paths/export.sh@5 -- $ export PATH
00:02:20.618 21:01:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.618 21:01:31 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:20.618 21:01:31 -- common/autobuild_common.sh@444 -- $ date +%s
00:02:20.618 21:01:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720990891.XXXXXX
00:02:20.618 21:01:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720990891.ZBTO9M
00:02:20.618 21:01:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:02:20.618 21:01:31 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:02:20.618 21:01:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:20.618 21:01:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:20.618 21:01:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:20.618 21:01:31 -- common/autobuild_common.sh@460 -- $ get_config_params
00:02:20.618 21:01:31 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:02:20.618 21:01:31 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.618 21:01:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:20.618 21:01:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:02:20.618 21:01:31 -- pm/common@17 -- $ local monitor
00:02:20.618 21:01:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:20.618 21:01:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:20.618 21:01:31 -- pm/common@25 -- $ sleep 1
00:02:20.618 21:01:31 -- pm/common@21 -- $ date +%s
00:02:20.618 21:01:31 -- pm/common@21 -- $ date +%s
00:02:20.618 21:01:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720990891
00:02:20.618 21:01:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720990891
00:02:20.618 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720990891_collect-cpu-load.pm.log
00:02:20.618 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720990891_collect-vmstat.pm.log
00:02:21.555 21:01:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:02:21.555 21:01:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:21.555 21:01:32 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:21.555 21:01:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:21.555 21:01:32 -- spdk/autobuild.sh@16 -- $ date -u
00:02:21.555 Sun Jul 14 09:01:32 PM UTC 2024
00:02:21.555 21:01:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:21.555 v24.09-pre-202-g719d03c6a
00:02:21.555 21:01:32 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:21.555 21:01:32 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:21.555 21:01:32 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:21.555 21:01:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:21.555 21:01:32 -- common/autotest_common.sh@10 -- $ set +x
00:02:21.555 ************************************
00:02:21.555 START TEST asan
00:02:21.555 ************************************
00:02:21.555 using asan
00:02:21.555 21:01:32 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:02:21.555
00:02:21.555 real 0m0.000s
00:02:21.555 user 0m0.000s
00:02:21.555 sys 0m0.000s
00:02:21.555 21:01:32 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:21.555 ************************************
00:02:21.555 21:01:32 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:21.555 END TEST asan
00:02:21.555 ************************************
00:02:21.555 21:01:33 -- common/autotest_common.sh@1142 -- $ return 0
00:02:21.555 21:01:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:21.555 21:01:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:21.555 21:01:33 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:21.555 21:01:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:21.555 21:01:33 -- common/autotest_common.sh@10 -- $ set +x
00:02:21.555 ************************************
00:02:21.555 START TEST ubsan
00:02:21.555 ************************************
00:02:21.555 using ubsan
00:02:21.555 21:01:33 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:02:21.555
00:02:21.555 real 0m0.000s
00:02:21.555 user 0m0.000s
00:02:21.555 sys 0m0.000s
00:02:21.555 21:01:33 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:21.555 ************************************
00:02:21.555 END TEST ubsan
00:02:21.555 ************************************
00:02:21.555 21:01:33 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:21.555 21:01:33 -- common/autotest_common.sh@1142 -- $ return 0
00:02:21.555 21:01:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:21.555 21:01:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:21.555 21:01:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:21.555 21:01:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:21.555 21:01:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:21.555 21:01:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:21.555 21:01:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:21.555 21:01:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:21.555 21:01:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:21.813 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:21.813 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:22.071 Using 'verbs' RDMA provider
00:02:38.399 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:48.369 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:48.627 Creating mk/config.mk...done.
00:02:48.627 Creating mk/cc.flags.mk...done.
00:02:48.627 Type 'make' to build.
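Note: the configure line above records the exact feature set for this run. A hedged sketch of reproducing the same build outside CI (the upstream repo URL and submodule step are standard SPDK practice; the --with-fio path is machine-specific to this log):

git clone https://github.com/spdk/spdk && cd spdk
git submodule update --init
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
make -j"$(nproc)"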
00:02:48.627 21:02:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:48.627 21:02:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:48.627 21:02:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:48.627 21:02:00 -- common/autotest_common.sh@10 -- $ set +x
00:02:48.627 ************************************
00:02:48.627 START TEST make
00:02:48.627 ************************************
00:02:48.627 21:02:00 make -- common/autotest_common.sh@1123 -- $ make -j10
00:02:48.886 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:48.886 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:48.886 meson setup builddir \
00:02:48.886 -Dwith-libaio=enabled \
00:02:48.886 -Dwith-liburing=enabled \
00:02:48.886 -Dwith-libvfn=disabled \
00:02:48.886 -Dwith-spdk=false && \
00:02:48.886 meson compile -C builddir && \
00:02:48.886 cd -)
00:02:49.144 make[1]: Nothing to be done for 'all'.
00:02:51.671 The Meson build system
00:02:51.671 Version: 1.3.1
00:02:51.671 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:51.671 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:51.671 Build type: native build
00:02:51.671 Project name: xnvme
00:02:51.671 Project version: 0.7.3
00:02:51.671 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:51.671 C linker for the host machine: cc ld.bfd 2.39-16
00:02:51.671 Host machine cpu family: x86_64
00:02:51.671 Host machine cpu: x86_64
00:02:51.671 Message: host_machine.system: linux
00:02:51.671 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:51.671 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:51.671 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:51.671 Run-time dependency threads found: YES
00:02:51.671 Has header "setupapi.h" : NO
00:02:51.671 Has header "linux/blkzoned.h" : YES
00:02:51.671 Has header "linux/blkzoned.h" : YES (cached)
00:02:51.671 Has header "libaio.h" : YES
00:02:51.671 Library aio found: YES
00:02:51.671 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:51.671 Run-time dependency liburing found: YES 2.2
00:02:51.671 Dependency libvfn skipped: feature with-libvfn disabled
00:02:51.671 Run-time dependency appleframeworks found: NO (tried framework)
00:02:51.671 Run-time dependency appleframeworks found: NO (tried framework)
00:02:51.671 Configuring xnvme_config.h using configuration
00:02:51.671 Configuring xnvme.spec using configuration
00:02:51.672 Run-time dependency bash-completion found: YES 2.11
00:02:51.672 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:51.672 Program cp found: YES (/usr/bin/cp)
00:02:51.672 Has header "winsock2.h" : NO
00:02:51.672 Has header "dbghelp.h" : NO
00:02:51.672 Library rpcrt4 found: NO
00:02:51.672 Library rt found: YES
00:02:51.672 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:51.672 Found CMake: /usr/bin/cmake (3.27.7)
00:02:51.672 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:51.672 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:51.672 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:51.672 Build targets in project: 32
00:02:51.672
00:02:51.672 xnvme 0.7.3
00:02:51.672
00:02:51.672 User defined options
00:02:51.672 with-libaio : enabled
00:02:51.672 with-liburing: enabled
00:02:51.672 with-libvfn : disabled
00:02:51.672 with-spdk : false
00:02:51.672
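Note: the xnvme submodule is configured with a one-shot `meson setup` plus `meson compile`, as shown in the make recipe above. If one of those -D options needs to change later, meson can reconfigure the existing build directory in place; a generic meson sketch (not something this job does, and enabling libvfn assumes the library is installed):

cd /home/vagrant/spdk_repo/spdk/xnvme
meson configure builddir -Dwith-libvfn=enabled   # flip one option in place
meson compile -C builddir                        # rebuild with the new setting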
00:02:51.672 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.235 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:52.235 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:52.235 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:52.235 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:52.235 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:52.235 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:52.235 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:52.235 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:52.235 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:52.235 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:52.235 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:52.235 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:52.492 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:52.492 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:52.492 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:52.492 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:52.492 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:52.492 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:52.492 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:52.492 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:52.492 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:52.492 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:52.492 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:52.492 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:52.492 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:52.492 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:52.492 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:52.749 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:52.749 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:52.749 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:52.749 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:52.749 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:52.749 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:52.749 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:52.749 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:52.749 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:52.749 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:52.749 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:52.749 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:52.749 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:52.749 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:52.749 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:52.749 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:52.749 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:52.749 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:52.749 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:52.749 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:52.749 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:52.749 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:52.749 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:52.749 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:52.749 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:52.749 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:52.749 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:52.749 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:52.749 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:52.749 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:52.749 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:52.749 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:53.006 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:53.006 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:53.006 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:53.006 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:53.006 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:53.006 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:53.006 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:53.006 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:53.006 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:53.006 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:53.006 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:53.006 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:53.006 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:53.006 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:53.006 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:53.264 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:53.264 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:53.264 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:53.264 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:53.264 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:53.264 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:53.264 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:53.264 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:53.264 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:53.264 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:53.264 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:53.264 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:53.264 [86/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:53.264 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:53.264 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:53.522 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:53.522 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:53.522 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:53.522 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:53.522 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:53.522 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:53.522 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:53.522 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:53.522 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:53.522 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:53.522 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:53.522 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:53.522 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:53.522 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:53.522 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:53.522 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:53.522 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:53.522 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:53.522 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:53.522 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:53.522 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:53.522 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:53.522 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:53.522 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:53.522 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:53.522 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:53.522 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:53.522 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:53.522 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:53.780 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:53.780 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:53.780 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:53.780 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:53.780 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:53.780 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:53.780 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:53.780 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:53.780 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:53.780 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:53.780 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:53.780 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:53.780 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:53.780 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:53.780 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:53.780 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:54.038 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:54.038 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:54.038 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:54.038 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:54.038 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:54.038 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:54.038 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:54.038 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:54.038 [142/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:54.038 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:54.038 [144/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:54.038 [145/203] Linking target lib/libxnvme.so
00:02:54.295 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:54.295 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:54.295 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:54.295 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:54.295 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:54.295 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:54.295 [152/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:54.295 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:54.295 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:54.295 [155/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:54.295 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:54.295 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:54.295 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:54.295 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:54.295 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:54.553 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:54.553 [162/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:54.553 [163/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:54.553 [164/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:54.553 [165/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:54.553 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:54.553 [167/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:54.553 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:54.553 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:54.553 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:54.553 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:54.811 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:54.811 [173/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:54.811 [174/203] Linking static target lib/libxnvme.a
00:02:54.811 [175/203] Linking target tests/xnvme_tests_async_intf
00:02:54.811 [176/203] Linking target tests/xnvme_tests_enum
00:02:54.811 [177/203] Linking target tests/xnvme_tests_buf
00:02:54.811 [178/203] Linking target tests/xnvme_tests_znd_append
00:02:54.811 [179/203] Linking target tests/xnvme_tests_scc
00:02:54.811 [180/203] Linking target tests/xnvme_tests_lblk
00:02:54.811 [181/203] Linking target tests/xnvme_tests_cli
00:02:54.811 [182/203] Linking target tests/xnvme_tests_xnvme_file
00:02:54.811 [183/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:54.811 [184/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:54.811 [185/203] Linking target tests/xnvme_tests_ioworker
00:02:54.811 [186/203] Linking target tests/xnvme_tests_kvs
00:02:54.811 [187/203] Linking target tests/xnvme_tests_znd_state
00:02:54.811 [188/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:54.811 [189/203] Linking target tests/xnvme_tests_map
00:02:54.811 [190/203] Linking target tools/xdd
00:02:54.811 [191/203] Linking target tools/zoned
00:02:54.811 [192/203] Linking target tools/kvs
00:02:54.811 [193/203] Linking target tools/lblk
00:02:54.811 [194/203] Linking target tools/xnvme_file
00:02:54.811 [195/203] Linking target examples/xnvme_hello
00:02:54.811 [196/203] Linking target tools/xnvme
00:02:54.811 [197/203] Linking target examples/xnvme_dev
00:02:54.811 [198/203] Linking target examples/xnvme_enum
00:02:54.811 [199/203] Linking target examples/xnvme_io_async
00:02:54.811 [200/203] Linking target examples/xnvme_single_async
00:02:54.811 [201/203] Linking target examples/xnvme_single_sync
00:02:54.811 [202/203] Linking target examples/zoned_io_sync
00:02:54.811 [203/203] Linking target examples/zoned_io_async
00:02:54.811 INFO: autodetecting backend as ninja
00:02:54.811 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:55.068 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:03.174 The Meson build system
00:03:03.175 Version: 1.3.1
00:03:03.175 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:03.175 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:03.175 Build type: native build
00:03:03.175 Program cat found: YES (/usr/bin/cat)
00:03:03.175 Project name: DPDK
00:03:03.175 Project version: 24.03.0
00:03:03.175 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:03.175 C linker for the host machine: cc ld.bfd 2.39-16
00:03:03.175 Host machine cpu family: x86_64
00:03:03.175 Host machine cpu: x86_64
00:03:03.175 Message: ## Building in Developer Mode ##
00:03:03.175 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:03.175 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:03.175 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:03.175 Program python3 found: YES (/usr/bin/python3)
00:03:03.175 Program cat found: YES (/usr/bin/cat)
00:03:03.175 Compiler for C supports arguments -march=native: YES
00:03:03.175 Checking for size of "void *" : 8
00:03:03.175 Checking for size of "void *" : 8 (cached)
00:03:03.175 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:03.175 Library m found: YES
00:03:03.175 Library numa found: YES
00:03:03.175 Has header "numaif.h" : YES
00:03:03.175 Library fdt found: NO
00:03:03.175 Library execinfo found: NO
00:03:03.175 Has header "execinfo.h" : YES
1.8.0 00:03:03.175 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:03.175 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:03.175 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:03.175 Run-time dependency openssl found: YES 3.0.9 00:03:03.175 Run-time dependency libpcap found: YES 1.10.4 00:03:03.175 Has header "pcap.h" with dependency libpcap: YES 00:03:03.175 Compiler for C supports arguments -Wcast-qual: YES 00:03:03.175 Compiler for C supports arguments -Wdeprecated: YES 00:03:03.175 Compiler for C supports arguments -Wformat: YES 00:03:03.175 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:03.175 Compiler for C supports arguments -Wformat-security: NO 00:03:03.175 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:03.175 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:03.175 Compiler for C supports arguments -Wnested-externs: YES 00:03:03.175 Compiler for C supports arguments -Wold-style-definition: YES 00:03:03.175 Compiler for C supports arguments -Wpointer-arith: YES 00:03:03.175 Compiler for C supports arguments -Wsign-compare: YES 00:03:03.175 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:03.175 Compiler for C supports arguments -Wundef: YES 00:03:03.175 Compiler for C supports arguments -Wwrite-strings: YES 00:03:03.175 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:03.175 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:03.175 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:03.175 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:03.175 Program objdump found: YES (/usr/bin/objdump) 00:03:03.175 Compiler for C supports arguments -mavx512f: YES 00:03:03.175 Checking if "AVX512 checking" compiles: YES 00:03:03.175 Fetching value of define "__SSE4_2__" : 1 00:03:03.175 Fetching value of define "__AES__" : 1 00:03:03.175 Fetching value of define "__AVX__" : 1 00:03:03.175 Fetching value of define "__AVX2__" : 1 00:03:03.175 Fetching value of define "__AVX512BW__" : (undefined) 00:03:03.175 Fetching value of define "__AVX512CD__" : (undefined) 00:03:03.175 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:03.175 Fetching value of define "__AVX512F__" : (undefined) 00:03:03.175 Fetching value of define "__AVX512VL__" : (undefined) 00:03:03.175 Fetching value of define "__PCLMUL__" : 1 00:03:03.175 Fetching value of define "__RDRND__" : 1 00:03:03.175 Fetching value of define "__RDSEED__" : 1 00:03:03.175 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:03.175 Fetching value of define "__znver1__" : (undefined) 00:03:03.175 Fetching value of define "__znver2__" : (undefined) 00:03:03.175 Fetching value of define "__znver3__" : (undefined) 00:03:03.175 Fetching value of define "__znver4__" : (undefined) 00:03:03.175 Library asan found: YES 00:03:03.175 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:03.175 Message: lib/log: Defining dependency "log" 00:03:03.175 Message: lib/kvargs: Defining dependency "kvargs" 00:03:03.175 Message: lib/telemetry: Defining dependency "telemetry" 00:03:03.175 Library rt found: YES 00:03:03.175 Checking for function "getentropy" : NO 00:03:03.175 Message: lib/eal: Defining dependency "eal" 00:03:03.175 Message: lib/ring: Defining dependency "ring" 00:03:03.175 Message: lib/rcu: Defining dependency "rcu" 00:03:03.175 Message: lib/mempool: Defining dependency "mempool" 00:03:03.175 Message: lib/mbuf: Defining 
00:03:03.175 Message: lib/mbuf: Defining dependency "mbuf"
00:03:03.175 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:03.175 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:03.175 Compiler for C supports arguments -mpclmul: YES
00:03:03.175 Compiler for C supports arguments -maes: YES
00:03:03.175 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:03.175 Compiler for C supports arguments -mavx512bw: YES
00:03:03.175 Compiler for C supports arguments -mavx512dq: YES
00:03:03.175 Compiler for C supports arguments -mavx512vl: YES
00:03:03.175 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:03.175 Compiler for C supports arguments -mavx2: YES
00:03:03.175 Compiler for C supports arguments -mavx: YES
00:03:03.175 Message: lib/net: Defining dependency "net"
00:03:03.175 Message: lib/meter: Defining dependency "meter"
00:03:03.175 Message: lib/ethdev: Defining dependency "ethdev"
00:03:03.175 Message: lib/pci: Defining dependency "pci"
00:03:03.175 Message: lib/cmdline: Defining dependency "cmdline"
00:03:03.175 Message: lib/hash: Defining dependency "hash"
00:03:03.175 Message: lib/timer: Defining dependency "timer"
00:03:03.175 Message: lib/compressdev: Defining dependency "compressdev"
00:03:03.175 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:03.175 Message: lib/dmadev: Defining dependency "dmadev"
00:03:03.175 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:03.175 Message: lib/power: Defining dependency "power"
00:03:03.175 Message: lib/reorder: Defining dependency "reorder"
00:03:03.175 Message: lib/security: Defining dependency "security"
00:03:03.175 Has header "linux/userfaultfd.h" : YES
00:03:03.175 Has header "linux/vduse.h" : YES
00:03:03.175 Message: lib/vhost: Defining dependency "vhost"
00:03:03.175 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:03.175 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:03.175 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:03.175 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:03.175 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:03.175 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:03.175 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:03.175 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:03.175 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:03.175 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:03.175 Program doxygen found: YES (/usr/bin/doxygen)
00:03:03.175 Configuring doxy-api-html.conf using configuration
00:03:03.175 Configuring doxy-api-man.conf using configuration
00:03:03.175 Program mandb found: YES (/usr/bin/mandb)
00:03:03.175 Program sphinx-build found: NO
00:03:03.175 Configuring rte_build_config.h using configuration
00:03:03.175 Message:
00:03:03.175 =================
00:03:03.175 Applications Enabled
00:03:03.175 =================
00:03:03.175
00:03:03.175 apps:
00:03:03.175
00:03:03.175
00:03:03.175 Message:
00:03:03.175 =================
00:03:03.175 Libraries Enabled
00:03:03.175 =================
00:03:03.175
00:03:03.175 libs:
00:03:03.175 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:03.175 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:03.175 cryptodev, dmadev, power, reorder, security, vhost,
00:03:03.175
00:03:03.175 Message:
00:03:03.175 ===============
00:03:03.175 Drivers Enabled
00:03:03.175 ===============
00:03:03.175
00:03:03.175 common:
00:03:03.175
00:03:03.175 bus:
00:03:03.175 pci, vdev,
00:03:03.175 mempool:
00:03:03.175 ring,
00:03:03.175 dma:
00:03:03.175
00:03:03.175 net:
00:03:03.175
00:03:03.175 crypto:
00:03:03.175
00:03:03.175 compress:
00:03:03.175
00:03:03.175 vdpa:
00:03:03.175
00:03:03.175
00:03:03.175 Message:
00:03:03.175 =================
00:03:03.175 Content Skipped
00:03:03.175 =================
00:03:03.175
00:03:03.175 apps:
00:03:03.175 dumpcap: explicitly disabled via build config
00:03:03.175 graph: explicitly disabled via build config
00:03:03.175 pdump: explicitly disabled via build config
00:03:03.175 proc-info: explicitly disabled via build config
00:03:03.175 test-acl: explicitly disabled via build config
00:03:03.175 test-bbdev: explicitly disabled via build config
00:03:03.175 test-cmdline: explicitly disabled via build config
00:03:03.175 test-compress-perf: explicitly disabled via build config
00:03:03.175 test-crypto-perf: explicitly disabled via build config
00:03:03.175 test-dma-perf: explicitly disabled via build config
00:03:03.175 test-eventdev: explicitly disabled via build config
00:03:03.176 test-fib: explicitly disabled via build config
00:03:03.176 test-flow-perf: explicitly disabled via build config
00:03:03.176 test-gpudev: explicitly disabled via build config
00:03:03.176 test-mldev: explicitly disabled via build config
00:03:03.176 test-pipeline: explicitly disabled via build config
00:03:03.176 test-pmd: explicitly disabled via build config
00:03:03.176 test-regex: explicitly disabled via build config
00:03:03.176 test-sad: explicitly disabled via build config
00:03:03.176 test-security-perf: explicitly disabled via build config
00:03:03.176
00:03:03.176 libs:
00:03:03.176 argparse: explicitly disabled via build config
00:03:03.176 metrics: explicitly disabled via build config
00:03:03.176 acl: explicitly disabled via build config
00:03:03.176 bbdev: explicitly disabled via build config
00:03:03.176 bitratestats: explicitly disabled via build config
00:03:03.176 bpf: explicitly disabled via build config
00:03:03.176 cfgfile: explicitly disabled via build config
00:03:03.176 distributor: explicitly disabled via build config
00:03:03.176 efd: explicitly disabled via build config
00:03:03.176 eventdev: explicitly disabled via build config
00:03:03.176 dispatcher: explicitly disabled via build config
00:03:03.176 gpudev: explicitly disabled via build config
00:03:03.176 gro: explicitly disabled via build config
00:03:03.176 gso: explicitly disabled via build config
00:03:03.176 ip_frag: explicitly disabled via build config
00:03:03.176 jobstats: explicitly disabled via build config
00:03:03.176 latencystats: explicitly disabled via build config
00:03:03.176 lpm: explicitly disabled via build config
00:03:03.176 member: explicitly disabled via build config
00:03:03.176 pcapng: explicitly disabled via build config
00:03:03.176 rawdev: explicitly disabled via build config
00:03:03.176 regexdev: explicitly disabled via build config
00:03:03.176 mldev: explicitly disabled via build config
00:03:03.176 rib: explicitly disabled via build config
00:03:03.176 sched: explicitly disabled via build config
00:03:03.176 stack: explicitly disabled via build config
00:03:03.176 ipsec: explicitly disabled via build config
00:03:03.176 pdcp: explicitly disabled via build config
00:03:03.176 fib: explicitly disabled via build config
00:03:03.176 port: explicitly disabled via build config
00:03:03.176 pdump: explicitly disabled via build config
00:03:03.176 table: explicitly disabled via build config
00:03:03.176 pipeline: explicitly disabled via build config
00:03:03.176 graph: explicitly disabled via build config
00:03:03.176 node: explicitly disabled via build config
00:03:03.176
00:03:03.176 drivers:
00:03:03.176 common/cpt: not in enabled drivers build config
00:03:03.176 common/dpaax: not in enabled drivers build config
00:03:03.176 common/iavf: not in enabled drivers build config
00:03:03.176 common/idpf: not in enabled drivers build config
00:03:03.176 common/ionic: not in enabled drivers build config
00:03:03.176 common/mvep: not in enabled drivers build config
00:03:03.176 common/octeontx: not in enabled drivers build config
00:03:03.176 bus/auxiliary: not in enabled drivers build config
00:03:03.176 bus/cdx: not in enabled drivers build config
00:03:03.176 bus/dpaa: not in enabled drivers build config
00:03:03.176 bus/fslmc: not in enabled drivers build config
00:03:03.176 bus/ifpga: not in enabled drivers build config
00:03:03.176 bus/platform: not in enabled drivers build config
00:03:03.176 bus/uacce: not in enabled drivers build config
00:03:03.176 bus/vmbus: not in enabled drivers build config
00:03:03.176 common/cnxk: not in enabled drivers build config
00:03:03.176 common/mlx5: not in enabled drivers build config
00:03:03.176 common/nfp: not in enabled drivers build config
00:03:03.176 common/nitrox: not in enabled drivers build config
00:03:03.176 common/qat: not in enabled drivers build config
00:03:03.176 common/sfc_efx: not in enabled drivers build config
00:03:03.176 mempool/bucket: not in enabled drivers build config
00:03:03.176 mempool/cnxk: not in enabled drivers build config
00:03:03.176 mempool/dpaa: not in enabled drivers build config
00:03:03.176 mempool/dpaa2: not in enabled drivers build config
00:03:03.176 mempool/octeontx: not in enabled drivers build config
00:03:03.176 mempool/stack: not in enabled drivers build config
00:03:03.176 dma/cnxk: not in enabled drivers build config
00:03:03.176 dma/dpaa: not in enabled drivers build config
00:03:03.176 dma/dpaa2: not in enabled drivers build config
00:03:03.176 dma/hisilicon: not in enabled drivers build config
00:03:03.176 dma/idxd: not in enabled drivers build config
00:03:03.176 dma/ioat: not in enabled drivers build config
00:03:03.176 dma/skeleton: not in enabled drivers build config
00:03:03.176 net/af_packet: not in enabled drivers build config
00:03:03.176 net/af_xdp: not in enabled drivers build config
00:03:03.176 net/ark: not in enabled drivers build config
00:03:03.176 net/atlantic: not in enabled drivers build config
00:03:03.176 net/avp: not in enabled drivers build config
00:03:03.176 net/axgbe: not in enabled drivers build config
00:03:03.176 net/bnx2x: not in enabled drivers build config
00:03:03.176 net/bnxt: not in enabled drivers build config
00:03:03.176 net/bonding: not in enabled drivers build config
00:03:03.176 net/cnxk: not in enabled drivers build config
00:03:03.176 net/cpfl: not in enabled drivers build config
00:03:03.176 net/cxgbe: not in enabled drivers build config
00:03:03.176 net/dpaa: not in enabled drivers build config
00:03:03.176 net/dpaa2: not in enabled drivers build config
00:03:03.176 net/e1000: not in enabled drivers build config
00:03:03.176 net/ena: not in enabled drivers build config
00:03:03.176 net/enetc: not in enabled drivers build config
00:03:03.176 net/enetfec: not in enabled drivers build config
00:03:03.176 net/enic: not in enabled drivers build config
00:03:03.176 net/failsafe: not in enabled drivers build config
00:03:03.176 net/fm10k: not in enabled drivers build config
00:03:03.176 net/gve: not in enabled drivers build config
00:03:03.176 net/hinic: not in enabled drivers build config
00:03:03.176 net/hns3: not in enabled drivers build config
00:03:03.176 net/i40e: not in enabled drivers build config
00:03:03.176 net/iavf: not in enabled drivers build config
00:03:03.176 net/ice: not in enabled drivers build config
00:03:03.176 net/idpf: not in enabled drivers build config
00:03:03.176 net/igc: not in enabled drivers build config
00:03:03.176 net/ionic: not in enabled drivers build config
00:03:03.176 net/ipn3ke: not in enabled drivers build config
00:03:03.176 net/ixgbe: not in enabled drivers build config
00:03:03.176 net/mana: not in enabled drivers build config
00:03:03.176 net/memif: not in enabled drivers build config
00:03:03.176 net/mlx4: not in enabled drivers build config
00:03:03.176 net/mlx5: not in enabled drivers build config
00:03:03.176 net/mvneta: not in enabled drivers build config
00:03:03.176 net/mvpp2: not in enabled drivers build config
00:03:03.176 net/netvsc: not in enabled drivers build config
00:03:03.176 net/nfb: not in enabled drivers build config
00:03:03.176 net/nfp: not in enabled drivers build config
00:03:03.176 net/ngbe: not in enabled drivers build config
00:03:03.176 net/null: not in enabled drivers build config
00:03:03.176 net/octeontx: not in enabled drivers build config
00:03:03.176 net/octeon_ep: not in enabled drivers build config
00:03:03.176 net/pcap: not in enabled drivers build config
00:03:03.176 net/pfe: not in enabled drivers build config
00:03:03.176 net/qede: not in enabled drivers build config
00:03:03.176 net/ring: not in enabled drivers build config
00:03:03.176 net/sfc: not in enabled drivers build config
00:03:03.176 net/softnic: not in enabled drivers build config
00:03:03.176 net/tap: not in enabled drivers build config
00:03:03.176 net/thunderx: not in enabled drivers build config
00:03:03.176 net/txgbe: not in enabled drivers build config
00:03:03.176 net/vdev_netvsc: not in enabled drivers build config
00:03:03.176 net/vhost: not in enabled drivers build config
00:03:03.176 net/virtio: not in enabled drivers build config
00:03:03.176 net/vmxnet3: not in enabled drivers build config
00:03:03.176 raw/*: missing internal dependency, "rawdev"
00:03:03.176 crypto/armv8: not in enabled drivers build config
00:03:03.176 crypto/bcmfs: not in enabled drivers build config
00:03:03.176 crypto/caam_jr: not in enabled drivers build config
00:03:03.176 crypto/ccp: not in enabled drivers build config
00:03:03.176 crypto/cnxk: not in enabled drivers build config
00:03:03.176 crypto/dpaa_sec: not in enabled drivers build config
00:03:03.176 crypto/dpaa2_sec: not in enabled drivers build config
00:03:03.176 crypto/ipsec_mb: not in enabled drivers build config
00:03:03.176 crypto/mlx5: not in enabled drivers build config
00:03:03.176 crypto/mvsam: not in enabled drivers build config
00:03:03.176 crypto/nitrox: not in enabled drivers build config
00:03:03.176 crypto/null: not in enabled drivers build config
00:03:03.176 crypto/octeontx: not in enabled drivers build config
00:03:03.176 crypto/openssl: not in enabled drivers build config
00:03:03.176 crypto/scheduler: not in enabled drivers build config
00:03:03.176 crypto/uadk: not in enabled drivers build config
00:03:03.176 crypto/virtio: not in enabled drivers build config
00:03:03.176 compress/isal: not in enabled drivers build config
00:03:03.176 compress/mlx5: not in enabled drivers build config
00:03:03.176 compress/nitrox: not in enabled drivers build config
00:03:03.176 compress/octeontx: not in enabled drivers build config
00:03:03.176 compress/zlib: not in enabled drivers build config
00:03:03.176 regex/*: missing internal dependency, "regexdev"
00:03:03.176 ml/*: missing internal dependency, "mldev"
00:03:03.176 vdpa/ifc: not in enabled drivers build config
00:03:03.176 vdpa/mlx5: not in enabled drivers build config
00:03:03.176 vdpa/nfp: not in enabled drivers build config
00:03:03.176 vdpa/sfc: not in enabled drivers build config
00:03:03.176 event/*: missing internal dependency, "eventdev"
00:03:03.177 baseband/*: missing internal dependency, "bbdev"
00:03:03.177 gpu/*: missing internal dependency, "gpudev"
00:03:03.177
00:03:03.177
00:03:03.177 Build targets in project: 85
00:03:03.177
00:03:03.177 DPDK 24.03.0
00:03:03.177
00:03:03.177 User defined options
00:03:03.177 buildtype : debug
00:03:03.177 default_library : shared
00:03:03.177 libdir : lib
00:03:03.177 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:03.177 b_sanitize : address
00:03:03.177 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:03.177 c_link_args :
00:03:03.177 cpu_instruction_set: native
00:03:03.177 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:03.177 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:03.177 enable_docs : false
00:03:03.177 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:03.177 enable_kmods : false
00:03:03.177 max_lcores : 128
00:03:03.177 tests : false
00:03:03.177
00:03:03.177 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:03.177 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:03.177 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:03.177 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:03.177 [3/268] Linking static target lib/librte_kvargs.a
00:03:03.177 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:03.177 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:03.177 [6/268] Linking static target lib/librte_log.a
00:03:03.744 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.003 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:04.003 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:04.003 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:04.003 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:04.003 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:04.003 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:04.003 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:04.261 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
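The "User defined options" summary above maps one-to-one onto meson -D flags, so the configure step this log captures can be approximated by hand. A sketch only, reconstructed from the logged values (the abbreviated disable_apps/disable_libs lists are spelled out in full in the summary above), not copied from the CI script itself:

  $ cd /home/vagrant/spdk_repo/spdk/dpdk
  $ meson setup build-tmp \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='dumpcap,graph,pdump,...' -Ddisable_libs='acl,argparse,...' \
        -Denable_docs=false -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  $ ninja -C build-tmp -j 10   # the same ninja invocation the log reports further down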
00:03:04.261 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:04.261 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:04.261 [18/268] Linking static target lib/librte_telemetry.a
00:03:04.520 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.520 [20/268] Linking target lib/librte_log.so.24.1
00:03:04.779 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:04.779 [22/268] Linking target lib/librte_kvargs.so.24.1
00:03:04.779 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:04.779 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:05.037 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:05.037 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:05.037 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:05.037 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:05.037 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:05.037 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:05.296 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.296 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:05.296 [33/268] Linking target lib/librte_telemetry.so.24.1
00:03:05.296 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:05.554 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:05.554 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:05.554 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:05.554 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:05.812 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:05.812 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:06.070 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:06.070 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:06.070 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:06.070 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:06.070 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:06.329 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:06.329 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:06.588 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:06.588 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:06.588 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:06.588 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:06.588 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:06.846 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:07.106 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:07.106 [55/268] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:07.106 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:07.364 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:07.364 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:07.364 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:07.364 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:07.623 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:07.623 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:07.623 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:07.882 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:07.882 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.140 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.140 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.140 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.399 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.399 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.399 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.399 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:08.658 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:08.658 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:08.658 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.658 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:08.917 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:08.917 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.175 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:09.175 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.175 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.175 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.175 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.175 [84/268] Linking static target lib/librte_ring.a 00:03:09.433 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.433 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:09.433 [87/268] Linking static target lib/librte_eal.a 00:03:09.691 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.691 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.948 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.948 [91/268] Linking static target lib/librte_rcu.a 00:03:09.948 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.948 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.948 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.948 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.948 [96/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.949 [97/268] Linking static target lib/librte_mempool.a 00:03:10.206 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.464 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.464 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.464 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.464 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.721 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.721 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.721 [105/268] Linking static target lib/librte_mbuf.a 00:03:10.721 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.721 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.979 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:10.979 [109/268] Linking static target lib/librte_meter.a 00:03:11.238 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:11.238 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:11.238 [112/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:11.238 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.238 [114/268] Linking static target lib/librte_net.a 00:03:11.496 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:11.496 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.496 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:11.754 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.754 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.011 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:12.269 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:12.527 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:12.527 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:12.785 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:12.785 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:13.042 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:13.042 [127/268] Linking static target lib/librte_pci.a 00:03:13.042 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:13.042 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:13.042 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:13.301 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:13.301 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:13.301 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:13.301 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:13.301 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:13.301 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:13.301 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:13.301 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:13.301 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:13.301 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.301 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:13.565 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:13.565 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:13.565 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:13.565 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:13.831 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:13.831 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:14.089 [148/268] Linking static target lib/librte_cmdline.a 00:03:14.089 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:14.089 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:14.089 [151/268] Linking static target lib/librte_ethdev.a 00:03:14.347 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:14.347 [153/268] Linking static target lib/librte_timer.a 00:03:14.347 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:14.347 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:14.605 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:14.605 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:14.863 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.863 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:15.121 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:15.121 [161/268] Linking static target lib/librte_compressdev.a 00:03:15.121 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:15.121 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:15.379 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:15.379 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:15.636 [166/268] Linking static target lib/librte_dmadev.a 00:03:15.636 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:15.636 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:15.636 [169/268] Linking static target lib/librte_hash.a 00:03:15.637 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.637 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:15.637 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:15.637 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:15.894 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.152 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:16.152 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 
00:03:16.152 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.410 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:16.410 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:16.410 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:16.410 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:16.410 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:16.410 [183/268] Linking static target lib/librte_cryptodev.a 00:03:16.668 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.926 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:16.926 [186/268] Linking static target lib/librte_power.a 00:03:16.926 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:16.926 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:17.183 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:17.183 [190/268] Linking static target lib/librte_reorder.a 00:03:17.183 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:17.183 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:17.183 [193/268] Linking static target lib/librte_security.a 00:03:17.440 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.698 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:17.698 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.698 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.698 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:18.264 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:18.264 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:18.264 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.264 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.264 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.264 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.523 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:18.781 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.781 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:18.781 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.781 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:19.040 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:19.040 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:19.040 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:19.040 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.040 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.040 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:19.040 [216/268] 
Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.298 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.298 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:19.298 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.298 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.298 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:19.298 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.298 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:19.298 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.298 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.298 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:19.865 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.123 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.123 [229/268] Linking target lib/librte_eal.so.24.1 00:03:20.381 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:20.381 [231/268] Linking target lib/librte_timer.so.24.1 00:03:20.381 [232/268] Linking target lib/librte_ring.so.24.1 00:03:20.381 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:20.381 [234/268] Linking target lib/librte_dmadev.so.24.1 00:03:20.381 [235/268] Linking target lib/librte_pci.so.24.1 00:03:20.381 [236/268] Linking target lib/librte_meter.so.24.1 00:03:20.381 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:20.381 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:20.381 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:20.381 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:20.381 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:20.640 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:20.640 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:20.640 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:20.640 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:20.640 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:20.640 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:20.640 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:20.898 [249/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.898 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:20.898 [251/268] Linking target lib/librte_net.so.24.1 00:03:20.898 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:20.898 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:20.898 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:21.156 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:21.156 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:21.156 
[257/268] Linking target lib/librte_cmdline.so.24.1 00:03:21.156 [258/268] Linking target lib/librte_security.so.24.1 00:03:21.156 [259/268] Linking target lib/librte_hash.so.24.1 00:03:21.156 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.156 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:21.414 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:21.414 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:21.414 [264/268] Linking target lib/librte_power.so.24.1 00:03:24.697 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.697 [266/268] Linking static target lib/librte_vhost.a 00:03:26.075 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.075 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:26.075 INFO: autodetecting backend as ninja 00:03:26.075 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:27.450 CC lib/ut_mock/mock.o 00:03:27.450 CC lib/ut/ut.o 00:03:27.450 CC lib/log/log.o 00:03:27.450 CC lib/log/log_deprecated.o 00:03:27.450 CC lib/log/log_flags.o 00:03:27.450 LIB libspdk_ut_mock.a 00:03:27.450 LIB libspdk_ut.a 00:03:27.450 LIB libspdk_log.a 00:03:27.450 SO libspdk_ut_mock.so.6.0 00:03:27.450 SO libspdk_ut.so.2.0 00:03:27.450 SO libspdk_log.so.7.0 00:03:27.708 SYMLINK libspdk_ut_mock.so 00:03:27.708 SYMLINK libspdk_ut.so 00:03:27.708 SYMLINK libspdk_log.so 00:03:27.708 CC lib/ioat/ioat.o 00:03:27.708 CC lib/dma/dma.o 00:03:27.708 CC lib/util/base64.o 00:03:27.708 CC lib/util/bit_array.o 00:03:27.987 CC lib/util/cpuset.o 00:03:27.987 CXX lib/trace_parser/trace.o 00:03:27.987 CC lib/util/crc16.o 00:03:27.987 CC lib/util/crc32.o 00:03:27.987 CC lib/util/crc32c.o 00:03:27.987 CC lib/util/crc32_ieee.o 00:03:27.987 CC lib/vfio_user/host/vfio_user_pci.o 00:03:27.987 CC lib/util/crc64.o 00:03:27.987 LIB libspdk_dma.a 00:03:27.987 CC lib/vfio_user/host/vfio_user.o 00:03:27.987 SO libspdk_dma.so.4.0 00:03:27.987 CC lib/util/dif.o 00:03:27.987 CC lib/util/fd.o 00:03:28.266 CC lib/util/file.o 00:03:28.266 SYMLINK libspdk_dma.so 00:03:28.266 CC lib/util/hexlify.o 00:03:28.266 CC lib/util/iov.o 00:03:28.266 LIB libspdk_ioat.a 00:03:28.266 SO libspdk_ioat.so.7.0 00:03:28.266 CC lib/util/math.o 00:03:28.266 CC lib/util/pipe.o 00:03:28.266 CC lib/util/strerror_tls.o 00:03:28.266 SYMLINK libspdk_ioat.so 00:03:28.266 CC lib/util/string.o 00:03:28.266 CC lib/util/uuid.o 00:03:28.266 LIB libspdk_vfio_user.a 00:03:28.266 CC lib/util/fd_group.o 00:03:28.266 SO libspdk_vfio_user.so.5.0 00:03:28.523 CC lib/util/xor.o 00:03:28.523 CC lib/util/zipf.o 00:03:28.523 SYMLINK libspdk_vfio_user.so 00:03:28.781 LIB libspdk_util.a 00:03:29.039 SO libspdk_util.so.9.1 00:03:29.039 LIB libspdk_trace_parser.a 00:03:29.039 SO libspdk_trace_parser.so.5.0 00:03:29.039 SYMLINK libspdk_util.so 00:03:29.297 SYMLINK libspdk_trace_parser.so 00:03:29.297 CC lib/conf/conf.o 00:03:29.297 CC lib/rdma_provider/common.o 00:03:29.297 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:29.297 CC lib/json/json_parse.o 00:03:29.297 CC lib/json/json_util.o 00:03:29.297 CC lib/json/json_write.o 00:03:29.297 CC lib/rdma_utils/rdma_utils.o 00:03:29.297 CC lib/vmd/vmd.o 00:03:29.297 CC lib/env_dpdk/env.o 00:03:29.297 CC lib/idxd/idxd.o 00:03:29.555 CC lib/env_dpdk/memory.o 00:03:29.555 CC lib/vmd/led.o 00:03:29.555 CC 
lib/env_dpdk/pci.o 00:03:29.555 LIB libspdk_rdma_provider.a 00:03:29.555 LIB libspdk_json.a 00:03:29.811 SO libspdk_rdma_provider.so.6.0 00:03:29.811 LIB libspdk_rdma_utils.a 00:03:29.811 SO libspdk_json.so.6.0 00:03:29.811 LIB libspdk_conf.a 00:03:29.811 SO libspdk_rdma_utils.so.1.0 00:03:29.811 SO libspdk_conf.so.6.0 00:03:29.811 SYMLINK libspdk_rdma_provider.so 00:03:29.811 CC lib/env_dpdk/init.o 00:03:29.811 CC lib/idxd/idxd_user.o 00:03:29.811 SYMLINK libspdk_json.so 00:03:29.811 CC lib/env_dpdk/threads.o 00:03:29.811 SYMLINK libspdk_rdma_utils.so 00:03:29.811 CC lib/env_dpdk/pci_ioat.o 00:03:29.811 SYMLINK libspdk_conf.so 00:03:30.068 CC lib/env_dpdk/pci_virtio.o 00:03:30.068 CC lib/env_dpdk/pci_vmd.o 00:03:30.068 CC lib/jsonrpc/jsonrpc_server.o 00:03:30.068 CC lib/env_dpdk/pci_idxd.o 00:03:30.068 CC lib/idxd/idxd_kernel.o 00:03:30.068 CC lib/env_dpdk/pci_event.o 00:03:30.068 CC lib/env_dpdk/sigbus_handler.o 00:03:30.068 CC lib/env_dpdk/pci_dpdk.o 00:03:30.068 LIB libspdk_vmd.a 00:03:30.068 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:30.325 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:30.325 SO libspdk_vmd.so.6.0 00:03:30.325 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:30.325 CC lib/jsonrpc/jsonrpc_client.o 00:03:30.325 LIB libspdk_idxd.a 00:03:30.325 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:30.325 SO libspdk_idxd.so.12.0 00:03:30.325 SYMLINK libspdk_vmd.so 00:03:30.325 SYMLINK libspdk_idxd.so 00:03:30.582 LIB libspdk_jsonrpc.a 00:03:30.582 SO libspdk_jsonrpc.so.6.0 00:03:30.582 SYMLINK libspdk_jsonrpc.so 00:03:30.839 CC lib/rpc/rpc.o 00:03:31.097 LIB libspdk_rpc.a 00:03:31.097 SO libspdk_rpc.so.6.0 00:03:31.355 LIB libspdk_env_dpdk.a 00:03:31.355 SYMLINK libspdk_rpc.so 00:03:31.355 SO libspdk_env_dpdk.so.14.1 00:03:31.355 CC lib/notify/notify.o 00:03:31.355 CC lib/keyring/keyring.o 00:03:31.355 CC lib/notify/notify_rpc.o 00:03:31.355 CC lib/keyring/keyring_rpc.o 00:03:31.613 CC lib/trace/trace.o 00:03:31.613 CC lib/trace/trace_flags.o 00:03:31.613 CC lib/trace/trace_rpc.o 00:03:31.613 SYMLINK libspdk_env_dpdk.so 00:03:31.613 LIB libspdk_notify.a 00:03:31.613 SO libspdk_notify.so.6.0 00:03:31.613 SYMLINK libspdk_notify.so 00:03:31.870 LIB libspdk_keyring.a 00:03:31.870 SO libspdk_keyring.so.1.0 00:03:31.870 LIB libspdk_trace.a 00:03:31.870 SO libspdk_trace.so.10.0 00:03:31.870 SYMLINK libspdk_keyring.so 00:03:31.870 SYMLINK libspdk_trace.so 00:03:32.128 CC lib/thread/thread.o 00:03:32.128 CC lib/thread/iobuf.o 00:03:32.128 CC lib/sock/sock.o 00:03:32.128 CC lib/sock/sock_rpc.o 00:03:32.695 LIB libspdk_sock.a 00:03:32.695 SO libspdk_sock.so.10.0 00:03:32.695 SYMLINK libspdk_sock.so 00:03:33.261 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:33.261 CC lib/nvme/nvme_ctrlr.o 00:03:33.261 CC lib/nvme/nvme_fabric.o 00:03:33.261 CC lib/nvme/nvme_ns_cmd.o 00:03:33.261 CC lib/nvme/nvme_ns.o 00:03:33.261 CC lib/nvme/nvme_pcie_common.o 00:03:33.261 CC lib/nvme/nvme_pcie.o 00:03:33.261 CC lib/nvme/nvme_qpair.o 00:03:33.261 CC lib/nvme/nvme.o 00:03:33.827 CC lib/nvme/nvme_quirks.o 00:03:34.085 CC lib/nvme/nvme_transport.o 00:03:34.085 CC lib/nvme/nvme_discovery.o 00:03:34.085 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:34.343 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:34.343 LIB libspdk_thread.a 00:03:34.343 CC lib/nvme/nvme_tcp.o 00:03:34.343 CC lib/nvme/nvme_opal.o 00:03:34.343 SO libspdk_thread.so.10.1 00:03:34.343 SYMLINK libspdk_thread.so 00:03:34.343 CC lib/nvme/nvme_io_msg.o 00:03:34.601 CC lib/nvme/nvme_poll_group.o 00:03:34.601 CC lib/nvme/nvme_zns.o 00:03:34.601 CC lib/nvme/nvme_stubs.o 00:03:34.859 CC lib/accel/accel.o 
00:03:34.859 CC lib/accel/accel_rpc.o 00:03:34.859 CC lib/nvme/nvme_auth.o 00:03:34.859 CC lib/nvme/nvme_cuse.o 00:03:35.117 CC lib/accel/accel_sw.o 00:03:35.117 CC lib/nvme/nvme_rdma.o 00:03:35.375 CC lib/blob/blobstore.o 00:03:35.375 CC lib/blob/request.o 00:03:35.375 CC lib/init/json_config.o 00:03:35.375 CC lib/virtio/virtio.o 00:03:35.633 CC lib/init/subsystem.o 00:03:35.891 CC lib/blob/zeroes.o 00:03:35.891 CC lib/virtio/virtio_vhost_user.o 00:03:35.891 CC lib/init/subsystem_rpc.o 00:03:35.891 CC lib/init/rpc.o 00:03:35.891 CC lib/virtio/virtio_vfio_user.o 00:03:35.891 CC lib/blob/blob_bs_dev.o 00:03:36.149 CC lib/virtio/virtio_pci.o 00:03:36.149 LIB libspdk_accel.a 00:03:36.149 LIB libspdk_init.a 00:03:36.149 SO libspdk_accel.so.15.1 00:03:36.149 SO libspdk_init.so.5.0 00:03:36.149 SYMLINK libspdk_accel.so 00:03:36.149 SYMLINK libspdk_init.so 00:03:36.408 LIB libspdk_virtio.a 00:03:36.408 CC lib/bdev/bdev.o 00:03:36.408 CC lib/bdev/bdev_rpc.o 00:03:36.408 CC lib/bdev/bdev_zone.o 00:03:36.408 CC lib/bdev/part.o 00:03:36.408 CC lib/bdev/scsi_nvme.o 00:03:36.408 SO libspdk_virtio.so.7.0 00:03:36.408 CC lib/event/app.o 00:03:36.408 CC lib/event/reactor.o 00:03:36.408 SYMLINK libspdk_virtio.so 00:03:36.408 CC lib/event/log_rpc.o 00:03:36.666 CC lib/event/app_rpc.o 00:03:36.666 CC lib/event/scheduler_static.o 00:03:36.666 LIB libspdk_nvme.a 00:03:36.925 SO libspdk_nvme.so.13.1 00:03:36.925 LIB libspdk_event.a 00:03:37.184 SO libspdk_event.so.14.0 00:03:37.184 SYMLINK libspdk_event.so 00:03:37.443 SYMLINK libspdk_nvme.so 00:03:39.974 LIB libspdk_blob.a 00:03:39.974 SO libspdk_blob.so.11.0 00:03:39.974 SYMLINK libspdk_blob.so 00:03:39.974 LIB libspdk_bdev.a 00:03:39.974 SO libspdk_bdev.so.15.1 00:03:39.974 CC lib/blobfs/blobfs.o 00:03:39.974 CC lib/blobfs/tree.o 00:03:39.974 CC lib/lvol/lvol.o 00:03:39.974 SYMLINK libspdk_bdev.so 00:03:40.231 CC lib/ftl/ftl_core.o 00:03:40.232 CC lib/ftl/ftl_init.o 00:03:40.232 CC lib/nbd/nbd.o 00:03:40.232 CC lib/ftl/ftl_debug.o 00:03:40.232 CC lib/nvmf/ctrlr.o 00:03:40.232 CC lib/ftl/ftl_layout.o 00:03:40.232 CC lib/ublk/ublk.o 00:03:40.232 CC lib/scsi/dev.o 00:03:40.490 CC lib/nvmf/ctrlr_discovery.o 00:03:40.490 CC lib/scsi/lun.o 00:03:40.490 CC lib/nvmf/ctrlr_bdev.o 00:03:40.748 CC lib/ftl/ftl_io.o 00:03:40.748 CC lib/ftl/ftl_sb.o 00:03:40.748 CC lib/nbd/nbd_rpc.o 00:03:41.018 CC lib/scsi/port.o 00:03:41.018 CC lib/ftl/ftl_l2p.o 00:03:41.018 CC lib/ublk/ublk_rpc.o 00:03:41.018 LIB libspdk_nbd.a 00:03:41.018 SO libspdk_nbd.so.7.0 00:03:41.018 CC lib/scsi/scsi.o 00:03:41.018 CC lib/nvmf/subsystem.o 00:03:41.304 SYMLINK libspdk_nbd.so 00:03:41.304 CC lib/scsi/scsi_bdev.o 00:03:41.304 CC lib/ftl/ftl_l2p_flat.o 00:03:41.304 LIB libspdk_blobfs.a 00:03:41.304 LIB libspdk_ublk.a 00:03:41.304 SO libspdk_blobfs.so.10.0 00:03:41.304 LIB libspdk_lvol.a 00:03:41.304 CC lib/nvmf/nvmf.o 00:03:41.304 SO libspdk_ublk.so.3.0 00:03:41.304 SO libspdk_lvol.so.10.0 00:03:41.304 SYMLINK libspdk_blobfs.so 00:03:41.304 CC lib/nvmf/nvmf_rpc.o 00:03:41.304 CC lib/ftl/ftl_nv_cache.o 00:03:41.304 SYMLINK libspdk_lvol.so 00:03:41.304 CC lib/ftl/ftl_band.o 00:03:41.304 SYMLINK libspdk_ublk.so 00:03:41.304 CC lib/ftl/ftl_band_ops.o 00:03:41.304 CC lib/ftl/ftl_writer.o 00:03:41.562 CC lib/ftl/ftl_rq.o 00:03:41.818 CC lib/ftl/ftl_reloc.o 00:03:41.818 CC lib/scsi/scsi_pr.o 00:03:41.818 CC lib/ftl/ftl_l2p_cache.o 00:03:41.818 CC lib/scsi/scsi_rpc.o 00:03:41.818 CC lib/ftl/ftl_p2l.o 00:03:42.075 CC lib/nvmf/transport.o 00:03:42.075 CC lib/scsi/task.o 00:03:42.075 CC lib/ftl/mngt/ftl_mngt.o 
00:03:42.332 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:42.332 LIB libspdk_scsi.a 00:03:42.332 CC lib/nvmf/tcp.o 00:03:42.332 CC lib/nvmf/stubs.o 00:03:42.332 SO libspdk_scsi.so.9.0 00:03:42.589 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:42.589 CC lib/nvmf/mdns_server.o 00:03:42.589 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:42.589 SYMLINK libspdk_scsi.so 00:03:42.589 CC lib/nvmf/rdma.o 00:03:42.589 CC lib/nvmf/auth.o 00:03:42.589 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:42.847 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:42.847 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:42.847 CC lib/iscsi/conn.o 00:03:42.847 CC lib/vhost/vhost.o 00:03:42.847 CC lib/vhost/vhost_rpc.o 00:03:43.104 CC lib/vhost/vhost_scsi.o 00:03:43.104 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:43.104 CC lib/iscsi/init_grp.o 00:03:43.104 CC lib/iscsi/iscsi.o 00:03:43.104 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:43.361 CC lib/iscsi/md5.o 00:03:43.618 CC lib/iscsi/param.o 00:03:43.618 CC lib/vhost/vhost_blk.o 00:03:43.618 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:43.618 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:43.618 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:43.618 CC lib/vhost/rte_vhost_user.o 00:03:43.875 CC lib/iscsi/portal_grp.o 00:03:43.875 CC lib/iscsi/tgt_node.o 00:03:43.875 CC lib/iscsi/iscsi_subsystem.o 00:03:44.132 CC lib/iscsi/iscsi_rpc.o 00:03:44.132 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:44.132 CC lib/iscsi/task.o 00:03:44.390 CC lib/ftl/utils/ftl_conf.o 00:03:44.390 CC lib/ftl/utils/ftl_md.o 00:03:44.390 CC lib/ftl/utils/ftl_mempool.o 00:03:44.390 CC lib/ftl/utils/ftl_bitmap.o 00:03:44.648 CC lib/ftl/utils/ftl_property.o 00:03:44.648 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:44.648 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:44.648 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:44.648 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:44.906 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:44.906 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:44.906 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:44.906 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:44.906 LIB libspdk_iscsi.a 00:03:44.906 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:44.906 LIB libspdk_vhost.a 00:03:44.906 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:44.906 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:44.906 SO libspdk_iscsi.so.8.0 00:03:45.163 SO libspdk_vhost.so.8.0 00:03:45.163 CC lib/ftl/base/ftl_base_dev.o 00:03:45.163 CC lib/ftl/base/ftl_base_bdev.o 00:03:45.163 CC lib/ftl/ftl_trace.o 00:03:45.163 SYMLINK libspdk_vhost.so 00:03:45.163 SYMLINK libspdk_iscsi.so 00:03:45.421 LIB libspdk_nvmf.a 00:03:45.421 LIB libspdk_ftl.a 00:03:45.421 SO libspdk_nvmf.so.18.1 00:03:45.680 SO libspdk_ftl.so.9.0 00:03:45.938 SYMLINK libspdk_nvmf.so 00:03:46.196 SYMLINK libspdk_ftl.so 00:03:46.455 CC module/env_dpdk/env_dpdk_rpc.o 00:03:46.455 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:46.455 CC module/accel/iaa/accel_iaa.o 00:03:46.455 CC module/blob/bdev/blob_bdev.o 00:03:46.455 CC module/keyring/file/keyring.o 00:03:46.455 CC module/sock/posix/posix.o 00:03:46.455 CC module/accel/error/accel_error.o 00:03:46.455 CC module/accel/ioat/accel_ioat.o 00:03:46.455 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:46.455 CC module/accel/dsa/accel_dsa.o 00:03:46.712 LIB libspdk_env_dpdk_rpc.a 00:03:46.712 SO libspdk_env_dpdk_rpc.so.6.0 00:03:46.712 CC module/keyring/file/keyring_rpc.o 00:03:46.712 SYMLINK libspdk_env_dpdk_rpc.so 00:03:46.712 CC module/accel/dsa/accel_dsa_rpc.o 00:03:46.712 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.712 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:46.712 CC module/accel/ioat/accel_ioat_rpc.o 
00:03:46.712 LIB libspdk_scheduler_dynamic.a 00:03:46.712 CC module/accel/iaa/accel_iaa_rpc.o 00:03:46.712 CC module/accel/error/accel_error_rpc.o 00:03:46.712 SO libspdk_scheduler_dynamic.so.4.0 00:03:46.712 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.712 LIB libspdk_keyring_file.a 00:03:46.969 LIB libspdk_blob_bdev.a 00:03:46.969 SYMLINK libspdk_scheduler_dynamic.so 00:03:46.969 LIB libspdk_accel_dsa.a 00:03:46.969 SO libspdk_keyring_file.so.1.0 00:03:46.969 LIB libspdk_accel_ioat.a 00:03:46.969 SO libspdk_blob_bdev.so.11.0 00:03:46.969 LIB libspdk_accel_iaa.a 00:03:46.969 SO libspdk_accel_dsa.so.5.0 00:03:46.969 SO libspdk_accel_ioat.so.6.0 00:03:46.969 SO libspdk_accel_iaa.so.3.0 00:03:46.969 SYMLINK libspdk_keyring_file.so 00:03:46.969 SYMLINK libspdk_blob_bdev.so 00:03:46.969 LIB libspdk_accel_error.a 00:03:46.969 SYMLINK libspdk_accel_dsa.so 00:03:46.969 SYMLINK libspdk_accel_ioat.so 00:03:46.969 SYMLINK libspdk_accel_iaa.so 00:03:46.969 SO libspdk_accel_error.so.2.0 00:03:46.969 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.969 SYMLINK libspdk_accel_error.so 00:03:46.969 CC module/keyring/linux/keyring.o 00:03:47.226 CC module/blobfs/bdev/blobfs_bdev.o 00:03:47.226 LIB libspdk_scheduler_gscheduler.a 00:03:47.226 CC module/bdev/gpt/gpt.o 00:03:47.226 CC module/bdev/malloc/bdev_malloc.o 00:03:47.226 CC module/keyring/linux/keyring_rpc.o 00:03:47.226 CC module/bdev/delay/vbdev_delay.o 00:03:47.226 CC module/bdev/error/vbdev_error.o 00:03:47.226 CC module/bdev/lvol/vbdev_lvol.o 00:03:47.226 CC module/bdev/null/bdev_null.o 00:03:47.226 SO libspdk_scheduler_gscheduler.so.4.0 00:03:47.226 SYMLINK libspdk_scheduler_gscheduler.so 00:03:47.226 CC module/bdev/null/bdev_null_rpc.o 00:03:47.485 LIB libspdk_keyring_linux.a 00:03:47.485 SO libspdk_keyring_linux.so.1.0 00:03:47.485 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:47.485 CC module/bdev/gpt/vbdev_gpt.o 00:03:47.485 SYMLINK libspdk_keyring_linux.so 00:03:47.485 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:47.485 LIB libspdk_sock_posix.a 00:03:47.485 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:47.485 SO libspdk_sock_posix.so.6.0 00:03:47.485 CC module/bdev/error/vbdev_error_rpc.o 00:03:47.485 LIB libspdk_bdev_null.a 00:03:47.743 SO libspdk_bdev_null.so.6.0 00:03:47.743 LIB libspdk_blobfs_bdev.a 00:03:47.743 SYMLINK libspdk_sock_posix.so 00:03:47.743 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:47.743 SO libspdk_blobfs_bdev.so.6.0 00:03:47.743 SYMLINK libspdk_bdev_null.so 00:03:47.743 SYMLINK libspdk_blobfs_bdev.so 00:03:47.743 LIB libspdk_bdev_delay.a 00:03:47.743 LIB libspdk_bdev_error.a 00:03:47.743 SO libspdk_bdev_delay.so.6.0 00:03:47.743 LIB libspdk_bdev_gpt.a 00:03:47.743 SO libspdk_bdev_error.so.6.0 00:03:47.743 SO libspdk_bdev_gpt.so.6.0 00:03:48.001 SYMLINK libspdk_bdev_delay.so 00:03:48.001 CC module/bdev/passthru/vbdev_passthru.o 00:03:48.001 CC module/bdev/nvme/bdev_nvme.o 00:03:48.001 LIB libspdk_bdev_malloc.a 00:03:48.001 SYMLINK libspdk_bdev_error.so 00:03:48.001 CC module/bdev/raid/bdev_raid.o 00:03:48.001 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:48.001 CC module/bdev/split/vbdev_split.o 00:03:48.001 SO libspdk_bdev_malloc.so.6.0 00:03:48.001 LIB libspdk_bdev_lvol.a 00:03:48.001 SYMLINK libspdk_bdev_gpt.so 00:03:48.001 SO libspdk_bdev_lvol.so.6.0 00:03:48.001 SYMLINK libspdk_bdev_malloc.so 00:03:48.001 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:48.001 CC module/bdev/xnvme/bdev_xnvme.o 00:03:48.001 SYMLINK libspdk_bdev_lvol.so 00:03:48.001 CC module/bdev/split/vbdev_split_rpc.o 00:03:48.259 CC 
module/bdev/aio/bdev_aio.o 00:03:48.259 CC module/bdev/ftl/bdev_ftl.o 00:03:48.259 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:48.259 LIB libspdk_bdev_split.a 00:03:48.259 SO libspdk_bdev_split.so.6.0 00:03:48.518 CC module/bdev/iscsi/bdev_iscsi.o 00:03:48.518 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:48.518 SYMLINK libspdk_bdev_split.so 00:03:48.518 LIB libspdk_bdev_passthru.a 00:03:48.518 SO libspdk_bdev_passthru.so.6.0 00:03:48.518 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:48.518 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:48.518 SYMLINK libspdk_bdev_passthru.so 00:03:48.518 CC module/bdev/aio/bdev_aio_rpc.o 00:03:48.518 CC module/bdev/nvme/nvme_rpc.o 00:03:48.518 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:48.518 LIB libspdk_bdev_xnvme.a 00:03:48.518 SO libspdk_bdev_xnvme.so.3.0 00:03:48.776 LIB libspdk_bdev_zone_block.a 00:03:48.776 SO libspdk_bdev_zone_block.so.6.0 00:03:48.776 SYMLINK libspdk_bdev_xnvme.so 00:03:48.776 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:48.776 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:48.776 LIB libspdk_bdev_aio.a 00:03:48.776 SO libspdk_bdev_aio.so.6.0 00:03:48.776 SYMLINK libspdk_bdev_zone_block.so 00:03:48.776 LIB libspdk_bdev_ftl.a 00:03:48.776 CC module/bdev/raid/bdev_raid_rpc.o 00:03:48.776 SO libspdk_bdev_ftl.so.6.0 00:03:48.776 SYMLINK libspdk_bdev_aio.so 00:03:48.776 CC module/bdev/nvme/bdev_mdns_client.o 00:03:48.776 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:48.776 CC module/bdev/nvme/vbdev_opal.o 00:03:48.776 SYMLINK libspdk_bdev_ftl.so 00:03:48.776 CC module/bdev/raid/bdev_raid_sb.o 00:03:49.035 CC module/bdev/raid/raid0.o 00:03:49.035 LIB libspdk_bdev_iscsi.a 00:03:49.035 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:49.035 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:49.035 CC module/bdev/raid/raid1.o 00:03:49.035 SO libspdk_bdev_iscsi.so.6.0 00:03:49.035 SYMLINK libspdk_bdev_iscsi.so 00:03:49.035 CC module/bdev/raid/concat.o 00:03:49.295 LIB libspdk_bdev_virtio.a 00:03:49.295 SO libspdk_bdev_virtio.so.6.0 00:03:49.295 SYMLINK libspdk_bdev_virtio.so 00:03:49.295 LIB libspdk_bdev_raid.a 00:03:49.554 SO libspdk_bdev_raid.so.6.0 00:03:49.554 SYMLINK libspdk_bdev_raid.so 00:03:50.491 LIB libspdk_bdev_nvme.a 00:03:50.491 SO libspdk_bdev_nvme.so.7.0 00:03:50.750 SYMLINK libspdk_bdev_nvme.so 00:03:51.317 CC module/event/subsystems/scheduler/scheduler.o 00:03:51.317 CC module/event/subsystems/vmd/vmd.o 00:03:51.317 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:51.317 CC module/event/subsystems/keyring/keyring.o 00:03:51.317 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:51.317 CC module/event/subsystems/iobuf/iobuf.o 00:03:51.317 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:51.317 CC module/event/subsystems/sock/sock.o 00:03:51.317 LIB libspdk_event_keyring.a 00:03:51.317 LIB libspdk_event_scheduler.a 00:03:51.317 LIB libspdk_event_vhost_blk.a 00:03:51.317 LIB libspdk_event_vmd.a 00:03:51.317 LIB libspdk_event_sock.a 00:03:51.317 SO libspdk_event_keyring.so.1.0 00:03:51.317 LIB libspdk_event_iobuf.a 00:03:51.317 SO libspdk_event_scheduler.so.4.0 00:03:51.575 SO libspdk_event_vhost_blk.so.3.0 00:03:51.575 SO libspdk_event_sock.so.5.0 00:03:51.575 SO libspdk_event_vmd.so.6.0 00:03:51.575 SO libspdk_event_iobuf.so.3.0 00:03:51.575 SYMLINK libspdk_event_scheduler.so 00:03:51.575 SYMLINK libspdk_event_keyring.so 00:03:51.575 SYMLINK libspdk_event_vhost_blk.so 00:03:51.575 SYMLINK libspdk_event_sock.so 00:03:51.575 SYMLINK libspdk_event_vmd.so 00:03:51.575 SYMLINK libspdk_event_iobuf.so 00:03:51.837 CC 
module/event/subsystems/accel/accel.o 00:03:51.837 LIB libspdk_event_accel.a 00:03:52.096 SO libspdk_event_accel.so.6.0 00:03:52.096 SYMLINK libspdk_event_accel.so 00:03:52.353 CC module/event/subsystems/bdev/bdev.o 00:03:52.610 LIB libspdk_event_bdev.a 00:03:52.610 SO libspdk_event_bdev.so.6.0 00:03:52.610 SYMLINK libspdk_event_bdev.so 00:03:52.868 CC module/event/subsystems/nbd/nbd.o 00:03:52.868 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:52.868 CC module/event/subsystems/scsi/scsi.o 00:03:52.868 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:52.868 CC module/event/subsystems/ublk/ublk.o 00:03:53.128 LIB libspdk_event_nbd.a 00:03:53.128 LIB libspdk_event_ublk.a 00:03:53.128 LIB libspdk_event_scsi.a 00:03:53.128 SO libspdk_event_nbd.so.6.0 00:03:53.128 SO libspdk_event_ublk.so.3.0 00:03:53.128 SO libspdk_event_scsi.so.6.0 00:03:53.128 SYMLINK libspdk_event_nbd.so 00:03:53.128 LIB libspdk_event_nvmf.a 00:03:53.128 SYMLINK libspdk_event_scsi.so 00:03:53.128 SYMLINK libspdk_event_ublk.so 00:03:53.128 SO libspdk_event_nvmf.so.6.0 00:03:53.387 SYMLINK libspdk_event_nvmf.so 00:03:53.387 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:53.387 CC module/event/subsystems/iscsi/iscsi.o 00:03:53.645 LIB libspdk_event_vhost_scsi.a 00:03:53.645 SO libspdk_event_vhost_scsi.so.3.0 00:03:53.645 LIB libspdk_event_iscsi.a 00:03:53.645 SO libspdk_event_iscsi.so.6.0 00:03:53.645 SYMLINK libspdk_event_vhost_scsi.so 00:03:53.904 SYMLINK libspdk_event_iscsi.so 00:03:53.904 SO libspdk.so.6.0 00:03:53.904 SYMLINK libspdk.so 00:03:54.163 CC app/trace_record/trace_record.o 00:03:54.163 CXX app/trace/trace.o 00:03:54.163 CC test/rpc_client/rpc_client_test.o 00:03:54.163 TEST_HEADER include/spdk/accel.h 00:03:54.163 TEST_HEADER include/spdk/accel_module.h 00:03:54.163 TEST_HEADER include/spdk/assert.h 00:03:54.163 TEST_HEADER include/spdk/barrier.h 00:03:54.163 TEST_HEADER include/spdk/base64.h 00:03:54.163 TEST_HEADER include/spdk/bdev.h 00:03:54.163 TEST_HEADER include/spdk/bdev_module.h 00:03:54.163 TEST_HEADER include/spdk/bdev_zone.h 00:03:54.163 TEST_HEADER include/spdk/bit_array.h 00:03:54.163 TEST_HEADER include/spdk/bit_pool.h 00:03:54.163 TEST_HEADER include/spdk/blob_bdev.h 00:03:54.163 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:54.163 TEST_HEADER include/spdk/blobfs.h 00:03:54.163 CC app/nvmf_tgt/nvmf_main.o 00:03:54.163 TEST_HEADER include/spdk/blob.h 00:03:54.163 TEST_HEADER include/spdk/conf.h 00:03:54.163 TEST_HEADER include/spdk/config.h 00:03:54.163 TEST_HEADER include/spdk/cpuset.h 00:03:54.163 TEST_HEADER include/spdk/crc16.h 00:03:54.163 TEST_HEADER include/spdk/crc32.h 00:03:54.163 TEST_HEADER include/spdk/crc64.h 00:03:54.163 TEST_HEADER include/spdk/dif.h 00:03:54.163 TEST_HEADER include/spdk/dma.h 00:03:54.163 TEST_HEADER include/spdk/endian.h 00:03:54.163 TEST_HEADER include/spdk/env_dpdk.h 00:03:54.163 TEST_HEADER include/spdk/env.h 00:03:54.163 TEST_HEADER include/spdk/event.h 00:03:54.163 TEST_HEADER include/spdk/fd_group.h 00:03:54.421 TEST_HEADER include/spdk/fd.h 00:03:54.421 CC examples/util/zipf/zipf.o 00:03:54.421 CC test/thread/poller_perf/poller_perf.o 00:03:54.421 TEST_HEADER include/spdk/file.h 00:03:54.421 TEST_HEADER include/spdk/ftl.h 00:03:54.421 TEST_HEADER include/spdk/gpt_spec.h 00:03:54.421 TEST_HEADER include/spdk/hexlify.h 00:03:54.421 TEST_HEADER include/spdk/histogram_data.h 00:03:54.421 TEST_HEADER include/spdk/idxd.h 00:03:54.421 TEST_HEADER include/spdk/idxd_spec.h 00:03:54.421 TEST_HEADER include/spdk/init.h 00:03:54.421 TEST_HEADER 
include/spdk/ioat.h 00:03:54.421 TEST_HEADER include/spdk/ioat_spec.h 00:03:54.421 TEST_HEADER include/spdk/iscsi_spec.h 00:03:54.421 CC test/dma/test_dma/test_dma.o 00:03:54.421 TEST_HEADER include/spdk/json.h 00:03:54.421 TEST_HEADER include/spdk/jsonrpc.h 00:03:54.421 TEST_HEADER include/spdk/keyring.h 00:03:54.421 TEST_HEADER include/spdk/keyring_module.h 00:03:54.421 TEST_HEADER include/spdk/likely.h 00:03:54.421 TEST_HEADER include/spdk/log.h 00:03:54.421 TEST_HEADER include/spdk/lvol.h 00:03:54.421 TEST_HEADER include/spdk/memory.h 00:03:54.421 TEST_HEADER include/spdk/mmio.h 00:03:54.421 TEST_HEADER include/spdk/nbd.h 00:03:54.421 TEST_HEADER include/spdk/notify.h 00:03:54.421 TEST_HEADER include/spdk/nvme.h 00:03:54.421 TEST_HEADER include/spdk/nvme_intel.h 00:03:54.421 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:54.421 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:54.421 TEST_HEADER include/spdk/nvme_spec.h 00:03:54.421 TEST_HEADER include/spdk/nvme_zns.h 00:03:54.421 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:54.421 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:54.421 TEST_HEADER include/spdk/nvmf.h 00:03:54.421 TEST_HEADER include/spdk/nvmf_spec.h 00:03:54.421 CC test/app/bdev_svc/bdev_svc.o 00:03:54.421 TEST_HEADER include/spdk/nvmf_transport.h 00:03:54.421 TEST_HEADER include/spdk/opal.h 00:03:54.421 TEST_HEADER include/spdk/opal_spec.h 00:03:54.422 TEST_HEADER include/spdk/pci_ids.h 00:03:54.422 TEST_HEADER include/spdk/pipe.h 00:03:54.422 TEST_HEADER include/spdk/queue.h 00:03:54.422 TEST_HEADER include/spdk/reduce.h 00:03:54.422 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.422 TEST_HEADER include/spdk/rpc.h 00:03:54.422 TEST_HEADER include/spdk/scheduler.h 00:03:54.422 TEST_HEADER include/spdk/scsi.h 00:03:54.422 TEST_HEADER include/spdk/scsi_spec.h 00:03:54.422 TEST_HEADER include/spdk/sock.h 00:03:54.422 TEST_HEADER include/spdk/stdinc.h 00:03:54.422 TEST_HEADER include/spdk/string.h 00:03:54.422 TEST_HEADER include/spdk/thread.h 00:03:54.422 TEST_HEADER include/spdk/trace.h 00:03:54.422 TEST_HEADER include/spdk/trace_parser.h 00:03:54.422 TEST_HEADER include/spdk/tree.h 00:03:54.422 TEST_HEADER include/spdk/ublk.h 00:03:54.422 LINK rpc_client_test 00:03:54.422 TEST_HEADER include/spdk/util.h 00:03:54.422 TEST_HEADER include/spdk/uuid.h 00:03:54.422 TEST_HEADER include/spdk/version.h 00:03:54.422 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:54.422 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:54.422 TEST_HEADER include/spdk/vhost.h 00:03:54.422 TEST_HEADER include/spdk/vmd.h 00:03:54.422 TEST_HEADER include/spdk/xor.h 00:03:54.422 TEST_HEADER include/spdk/zipf.h 00:03:54.422 LINK poller_perf 00:03:54.422 CXX test/cpp_headers/accel.o 00:03:54.422 LINK zipf 00:03:54.422 LINK spdk_trace_record 00:03:54.422 LINK nvmf_tgt 00:03:54.680 LINK bdev_svc 00:03:54.680 CXX test/cpp_headers/accel_module.o 00:03:54.680 LINK spdk_trace 00:03:54.680 CC test/env/vtophys/vtophys.o 00:03:54.680 CXX test/cpp_headers/assert.o 00:03:54.680 CC app/iscsi_tgt/iscsi_tgt.o 00:03:54.939 LINK test_dma 00:03:54.939 CC app/spdk_tgt/spdk_tgt.o 00:03:54.939 CC examples/ioat/perf/perf.o 00:03:54.939 LINK vtophys 00:03:54.939 CXX test/cpp_headers/barrier.o 00:03:54.939 CC examples/ioat/verify/verify.o 00:03:54.939 CC app/spdk_lspci/spdk_lspci.o 00:03:54.939 LINK iscsi_tgt 00:03:54.939 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:55.197 LINK mem_callbacks 00:03:55.197 LINK spdk_tgt 00:03:55.197 CXX test/cpp_headers/base64.o 00:03:55.197 CC app/spdk_nvme_perf/perf.o 00:03:55.197 LINK spdk_lspci 
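The CXX test/cpp_headers/*.o steps running through this part of the build compile each public SPDK header as its own translation unit, so a header that fails to pull in its own dependencies breaks the build immediately. A minimal shell sketch of that idea — illustrative only; the scratch file name tu.cpp and the compiler flags are assumptions, not SPDK's actual Makefile rules:

    for h in include/spdk/*.h; do
        # hypothetical scratch TU that includes exactly one public header
        printf '#include <spdk/%s>\n' "$(basename "$h")" > tu.cpp
        c++ -Iinclude -c tu.cpp -o /dev/null \
            || echo "header $h is not self-contained"
    done

These per-header objects are also why the coverage baseline further down reports "no functions found" for every test/cpp_headers/*.gcno: such translation units contain declarations but no executed functions.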
00:03:55.197 LINK ioat_perf 00:03:55.197 CC app/spdk_nvme_identify/identify.o 00:03:55.197 LINK verify 00:03:55.456 CXX test/cpp_headers/bdev.o 00:03:55.456 CXX test/cpp_headers/bdev_module.o 00:03:55.456 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:55.456 CXX test/cpp_headers/bdev_zone.o 00:03:55.456 CC test/event/event_perf/event_perf.o 00:03:55.456 CC test/nvme/aer/aer.o 00:03:55.456 CC examples/vmd/lsvmd/lsvmd.o 00:03:55.456 LINK nvme_fuzz 00:03:55.456 LINK env_dpdk_post_init 00:03:55.456 CXX test/cpp_headers/bit_array.o 00:03:55.714 LINK event_perf 00:03:55.714 CC test/app/histogram_perf/histogram_perf.o 00:03:55.714 CC test/app/jsoncat/jsoncat.o 00:03:55.714 LINK lsvmd 00:03:55.714 CXX test/cpp_headers/bit_pool.o 00:03:55.714 LINK histogram_perf 00:03:55.714 LINK jsoncat 00:03:55.714 CC test/env/memory/memory_ut.o 00:03:55.714 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:55.714 CC test/event/reactor/reactor.o 00:03:55.972 LINK aer 00:03:55.972 CC examples/vmd/led/led.o 00:03:55.972 CXX test/cpp_headers/blob_bdev.o 00:03:55.972 CXX test/cpp_headers/blobfs_bdev.o 00:03:55.972 LINK reactor 00:03:55.972 LINK led 00:03:56.230 CC test/nvme/reset/reset.o 00:03:56.230 CXX test/cpp_headers/blobfs.o 00:03:56.230 CC test/accel/dif/dif.o 00:03:56.230 LINK spdk_nvme_perf 00:03:56.230 LINK spdk_nvme_identify 00:03:56.230 CC test/nvme/sgl/sgl.o 00:03:56.230 CC test/event/reactor_perf/reactor_perf.o 00:03:56.230 CXX test/cpp_headers/blob.o 00:03:56.488 LINK reactor_perf 00:03:56.488 LINK reset 00:03:56.488 CC examples/idxd/perf/perf.o 00:03:56.488 CC app/spdk_nvme_discover/discovery_aer.o 00:03:56.488 CC test/nvme/e2edp/nvme_dp.o 00:03:56.488 CXX test/cpp_headers/conf.o 00:03:56.488 LINK sgl 00:03:56.488 CXX test/cpp_headers/config.o 00:03:56.746 CC test/event/app_repeat/app_repeat.o 00:03:56.746 LINK spdk_nvme_discover 00:03:56.746 CXX test/cpp_headers/cpuset.o 00:03:56.746 CXX test/cpp_headers/crc16.o 00:03:56.746 LINK dif 00:03:56.746 LINK nvme_dp 00:03:56.746 LINK app_repeat 00:03:56.746 LINK idxd_perf 00:03:56.746 CC test/blobfs/mkfs/mkfs.o 00:03:57.004 CXX test/cpp_headers/crc32.o 00:03:57.004 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:57.004 CC app/spdk_top/spdk_top.o 00:03:57.004 CXX test/cpp_headers/crc64.o 00:03:57.004 LINK mkfs 00:03:57.004 LINK memory_ut 00:03:57.004 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:57.004 CC test/nvme/overhead/overhead.o 00:03:57.263 CC test/event/scheduler/scheduler.o 00:03:57.263 CC examples/thread/thread/thread_ex.o 00:03:57.263 LINK interrupt_tgt 00:03:57.263 CXX test/cpp_headers/dif.o 00:03:57.263 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:57.263 CXX test/cpp_headers/dma.o 00:03:57.523 CXX test/cpp_headers/endian.o 00:03:57.523 CC test/env/pci/pci_ut.o 00:03:57.523 LINK scheduler 00:03:57.523 LINK overhead 00:03:57.523 LINK thread 00:03:57.523 CC test/app/stub/stub.o 00:03:57.523 CXX test/cpp_headers/env_dpdk.o 00:03:57.523 CC test/nvme/err_injection/err_injection.o 00:03:57.523 CXX test/cpp_headers/env.o 00:03:57.523 CXX test/cpp_headers/event.o 00:03:57.782 LINK stub 00:03:57.782 LINK vhost_fuzz 00:03:57.782 LINK err_injection 00:03:57.782 CXX test/cpp_headers/fd_group.o 00:03:57.782 CXX test/cpp_headers/fd.o 00:03:57.782 CC examples/sock/hello_world/hello_sock.o 00:03:57.782 LINK pci_ut 00:03:58.041 CC examples/accel/perf/accel_perf.o 00:03:58.041 LINK iscsi_fuzz 00:03:58.041 CC examples/blob/hello_world/hello_blob.o 00:03:58.041 CXX test/cpp_headers/file.o 00:03:58.041 CC test/nvme/startup/startup.o 00:03:58.041 CC 
test/nvme/reserve/reserve.o 00:03:58.041 CC examples/blob/cli/blobcli.o 00:03:58.041 LINK spdk_top 00:03:58.300 CXX test/cpp_headers/ftl.o 00:03:58.300 LINK hello_sock 00:03:58.300 LINK startup 00:03:58.300 LINK hello_blob 00:03:58.300 LINK reserve 00:03:58.300 CC app/vhost/vhost.o 00:03:58.300 CXX test/cpp_headers/gpt_spec.o 00:03:58.300 CC app/spdk_dd/spdk_dd.o 00:03:58.300 CC test/nvme/simple_copy/simple_copy.o 00:03:58.561 CXX test/cpp_headers/hexlify.o 00:03:58.561 CC app/fio/nvme/fio_plugin.o 00:03:58.561 CXX test/cpp_headers/histogram_data.o 00:03:58.561 CXX test/cpp_headers/idxd.o 00:03:58.561 LINK accel_perf 00:03:58.561 LINK vhost 00:03:58.818 CXX test/cpp_headers/idxd_spec.o 00:03:58.818 LINK blobcli 00:03:58.818 LINK simple_copy 00:03:58.818 CXX test/cpp_headers/init.o 00:03:58.818 CC examples/nvme/hello_world/hello_world.o 00:03:58.818 LINK spdk_dd 00:03:58.818 CC app/fio/bdev/fio_plugin.o 00:03:58.818 CC test/lvol/esnap/esnap.o 00:03:58.818 CXX test/cpp_headers/ioat.o 00:03:58.818 CC test/bdev/bdevio/bdevio.o 00:03:59.076 CC examples/nvme/reconnect/reconnect.o 00:03:59.076 CC test/nvme/connect_stress/connect_stress.o 00:03:59.076 LINK hello_world 00:03:59.076 CC test/nvme/boot_partition/boot_partition.o 00:03:59.076 CXX test/cpp_headers/ioat_spec.o 00:03:59.076 CC test/nvme/compliance/nvme_compliance.o 00:03:59.076 LINK spdk_nvme 00:03:59.076 LINK connect_stress 00:03:59.335 LINK boot_partition 00:03:59.335 CXX test/cpp_headers/iscsi_spec.o 00:03:59.335 CC test/nvme/fused_ordering/fused_ordering.o 00:03:59.335 CXX test/cpp_headers/json.o 00:03:59.335 CXX test/cpp_headers/jsonrpc.o 00:03:59.335 CXX test/cpp_headers/keyring.o 00:03:59.335 LINK bdevio 00:03:59.335 LINK reconnect 00:03:59.593 LINK spdk_bdev 00:03:59.593 LINK fused_ordering 00:03:59.593 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.593 CXX test/cpp_headers/keyring_module.o 00:03:59.593 LINK nvme_compliance 00:03:59.593 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:59.593 CXX test/cpp_headers/likely.o 00:03:59.593 CC test/nvme/cuse/cuse.o 00:03:59.593 CC test/nvme/fdp/fdp.o 00:03:59.852 CC examples/nvme/arbitration/arbitration.o 00:03:59.852 CXX test/cpp_headers/log.o 00:03:59.852 LINK doorbell_aers 00:03:59.852 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:59.852 CC examples/nvme/hotplug/hotplug.o 00:03:59.852 CC examples/bdev/hello_world/hello_bdev.o 00:03:59.852 CXX test/cpp_headers/lvol.o 00:04:00.110 LINK cmb_copy 00:04:00.110 LINK fdp 00:04:00.110 CC examples/nvme/abort/abort.o 00:04:00.110 LINK hotplug 00:04:00.110 LINK hello_bdev 00:04:00.110 CXX test/cpp_headers/memory.o 00:04:00.110 LINK nvme_manage 00:04:00.110 LINK arbitration 00:04:00.369 CXX test/cpp_headers/mmio.o 00:04:00.369 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:00.369 CXX test/cpp_headers/nbd.o 00:04:00.369 CXX test/cpp_headers/notify.o 00:04:00.369 CXX test/cpp_headers/nvme.o 00:04:00.369 CXX test/cpp_headers/nvme_intel.o 00:04:00.369 CXX test/cpp_headers/nvme_ocssd.o 00:04:00.369 CC examples/bdev/bdevperf/bdevperf.o 00:04:00.369 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:00.369 LINK pmr_persistence 00:04:00.627 CXX test/cpp_headers/nvme_spec.o 00:04:00.627 LINK abort 00:04:00.627 CXX test/cpp_headers/nvme_zns.o 00:04:00.627 CXX test/cpp_headers/nvmf_cmd.o 00:04:00.627 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:00.627 CXX test/cpp_headers/nvmf.o 00:04:00.627 CXX test/cpp_headers/nvmf_spec.o 00:04:00.627 CXX test/cpp_headers/nvmf_transport.o 00:04:00.627 CXX test/cpp_headers/opal.o 00:04:00.627 CXX 
test/cpp_headers/opal_spec.o 00:04:00.885 CXX test/cpp_headers/pci_ids.o 00:04:00.885 CXX test/cpp_headers/pipe.o 00:04:00.885 CXX test/cpp_headers/queue.o 00:04:00.885 CXX test/cpp_headers/reduce.o 00:04:00.885 CXX test/cpp_headers/rpc.o 00:04:00.885 CXX test/cpp_headers/scheduler.o 00:04:00.885 CXX test/cpp_headers/scsi.o 00:04:00.885 CXX test/cpp_headers/scsi_spec.o 00:04:00.885 CXX test/cpp_headers/sock.o 00:04:00.885 CXX test/cpp_headers/stdinc.o 00:04:00.885 CXX test/cpp_headers/string.o 00:04:01.143 CXX test/cpp_headers/thread.o 00:04:01.143 CXX test/cpp_headers/trace.o 00:04:01.143 CXX test/cpp_headers/trace_parser.o 00:04:01.143 CXX test/cpp_headers/tree.o 00:04:01.143 CXX test/cpp_headers/ublk.o 00:04:01.143 CXX test/cpp_headers/util.o 00:04:01.143 CXX test/cpp_headers/uuid.o 00:04:01.143 CXX test/cpp_headers/version.o 00:04:01.143 LINK cuse 00:04:01.143 CXX test/cpp_headers/vfio_user_pci.o 00:04:01.143 CXX test/cpp_headers/vfio_user_spec.o 00:04:01.143 CXX test/cpp_headers/vhost.o 00:04:01.402 CXX test/cpp_headers/vmd.o 00:04:01.402 CXX test/cpp_headers/xor.o 00:04:01.402 CXX test/cpp_headers/zipf.o 00:04:01.402 LINK bdevperf 00:04:01.967 CC examples/nvmf/nvmf/nvmf.o 00:04:02.227 LINK nvmf 00:04:05.512 LINK esnap 00:04:05.512 00:04:05.512 real 1m16.814s 00:04:05.512 user 7m31.546s 00:04:05.512 sys 1m33.257s 00:04:05.512 21:03:16 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:05.512 21:03:16 make -- common/autotest_common.sh@10 -- $ set +x 00:04:05.512 ************************************ 00:04:05.512 END TEST make 00:04:05.512 ************************************ 00:04:05.512 21:03:16 -- common/autotest_common.sh@1142 -- $ return 0 00:04:05.512 21:03:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:05.512 21:03:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:05.512 21:03:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:05.512 21:03:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.512 21:03:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:05.512 21:03:17 -- pm/common@44 -- $ pid=5227 00:04:05.512 21:03:17 -- pm/common@50 -- $ kill -TERM 5227 00:04:05.512 21:03:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.512 21:03:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:05.512 21:03:17 -- pm/common@44 -- $ pid=5229 00:04:05.512 21:03:17 -- pm/common@50 -- $ kill -TERM 5229 00:04:05.770 21:03:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:05.770 21:03:17 -- nvmf/common.sh@7 -- # uname -s 00:04:05.770 21:03:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.770 21:03:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.770 21:03:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.770 21:03:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.770 21:03:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.770 21:03:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.770 21:03:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.770 21:03:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.770 21:03:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.770 21:03:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.770 21:03:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:98373986-152d-4edd-b0f9-b4d926b76024 00:04:05.770 
21:03:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=98373986-152d-4edd-b0f9-b4d926b76024 00:04:05.770 21:03:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.770 21:03:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.770 21:03:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.770 21:03:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.770 21:03:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:05.770 21:03:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.770 21:03:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.770 21:03:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.770 21:03:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.770 21:03:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.770 21:03:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.770 21:03:17 -- paths/export.sh@5 -- # export PATH 00:04:05.770 21:03:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.770 21:03:17 -- nvmf/common.sh@47 -- # : 0 00:04:05.770 21:03:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:05.770 21:03:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:05.770 21:03:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.770 21:03:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.770 21:03:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.770 21:03:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:05.770 21:03:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:05.770 21:03:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:05.770 21:03:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.770 21:03:17 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.770 21:03:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.770 21:03:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:05.770 21:03:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.770 21:03:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.770 21:03:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.770 21:03:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:05.770 21:03:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:05.770 21:03:17 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:05.770 21:03:17 -- spdk/autotest.sh@48 -- # udevadm_pid=53730 00:04:05.770 21:03:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:05.770 21:03:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:05.770 21:03:17 -- pm/common@17 -- # local monitor 00:04:05.770 21:03:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.770 21:03:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.770 21:03:17 -- pm/common@25 -- # sleep 1 00:04:05.770 21:03:17 -- pm/common@21 -- # date +%s 00:04:05.770 21:03:17 -- pm/common@21 -- # date +%s 00:04:05.770 21:03:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720990997 00:04:05.770 21:03:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720990997 00:04:05.771 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720990997_collect-vmstat.pm.log 00:04:05.771 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720990997_collect-cpu-load.pm.log 00:04:06.703 21:03:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:06.703 21:03:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:06.703 21:03:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:06.703 21:03:18 -- common/autotest_common.sh@10 -- # set +x 00:04:06.703 21:03:18 -- spdk/autotest.sh@59 -- # create_test_list 00:04:06.703 21:03:18 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:06.703 21:03:18 -- common/autotest_common.sh@10 -- # set +x 00:04:06.703 21:03:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:06.703 21:03:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:06.703 21:03:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:06.703 21:03:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:06.703 21:03:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:06.703 21:03:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:06.703 21:03:18 -- common/autotest_common.sh@1455 -- # uname 00:04:06.703 21:03:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:06.703 21:03:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:06.703 21:03:18 -- common/autotest_common.sh@1475 -- # uname 00:04:06.703 21:03:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:06.703 21:03:18 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:06.703 21:03:18 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:06.703 21:03:18 -- spdk/autotest.sh@72 -- # hash lcov 00:04:06.703 21:03:18 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:06.703 21:03:18 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:06.703 --rc lcov_branch_coverage=1 00:04:06.703 --rc lcov_function_coverage=1 00:04:06.703 --rc genhtml_branch_coverage=1 00:04:06.703 --rc genhtml_function_coverage=1 00:04:06.703 --rc genhtml_legend=1 00:04:06.703 --rc geninfo_all_blocks=1 00:04:06.703 ' 00:04:06.703 21:03:18 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:06.703 --rc lcov_branch_coverage=1 00:04:06.703 --rc lcov_function_coverage=1 00:04:06.703 --rc genhtml_branch_coverage=1 00:04:06.703 --rc 
genhtml_function_coverage=1 00:04:06.703 --rc genhtml_legend=1 00:04:06.703 --rc geninfo_all_blocks=1 00:04:06.703 ' 00:04:06.703 21:03:18 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:06.703 --rc lcov_branch_coverage=1 00:04:06.703 --rc lcov_function_coverage=1 00:04:06.703 --rc genhtml_branch_coverage=1 00:04:06.703 --rc genhtml_function_coverage=1 00:04:06.703 --rc genhtml_legend=1 00:04:06.703 --rc geninfo_all_blocks=1 00:04:06.703 --no-external' 00:04:06.703 21:03:18 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:06.703 --rc lcov_branch_coverage=1 00:04:06.703 --rc lcov_function_coverage=1 00:04:06.703 --rc genhtml_branch_coverage=1 00:04:06.703 --rc genhtml_function_coverage=1 00:04:06.703 --rc genhtml_legend=1 00:04:06.703 --rc geninfo_all_blocks=1 00:04:06.703 --no-external' 00:04:06.703 21:03:18 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:06.961 lcov: LCOV version 1.14 00:04:06.962 21:03:18 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:21.856 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:21.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 
00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 
00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:31.856 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:31.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 
00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 
00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:31.857 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:31.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:36.044 21:03:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:36.044 21:03:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.044 21:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:36.044 21:03:46 -- spdk/autotest.sh@91 -- # rm -f 00:04:36.044 21:03:46 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.302 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:36.302 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:36.302 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:36.302 0000:00:13.0 (1b36 0010): Already using the nvme driver 
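The geninfo warnings above are expected at this point in the run: the baseline capture uses lcov's initial (-i) mode, scanning .gcno files before any test has executed, and the per-header objects carry no function records at all. The xtrace that follows walks /sys/block/nvme* and skips zoned namespaces; condensed, the check being traced amounts to this sketch (the echo message is illustrative, not part of the script):

    for nvme in /sys/block/nvme*; do
        # a namespace counts as zoned when queue/zoned exists and reads anything but "none"
        if [[ -e $nvme/queue/zoned && $(cat "$nvme/queue/zoned") != none ]]; then
            echo "skipping zoned device $nvme"    # illustrative message
        fi
    done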
00:04:36.302 21:03:47 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:36.302 21:03:47 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:36.302 21:03:47 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:36.302 21:03:47 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:36.302 21:03:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.302 21:03:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.302 21:03:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.302 21:03:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.302 21:03:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:36.302 21:03:47 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:36.302 21:03:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.302 21:03:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:36.302 21:03:47 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:36.302 21:03:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.302 21:03:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.302 21:03:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:36.302 21:03:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:36.302 21:03:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.302 21:03:47 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:36.302 21:03:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.302 21:03:47 -- spdk/autotest.sh@112 -- 
# [[ -z '' ]] 00:04:36.302 21:03:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:36.302 21:03:47 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:36.302 21:03:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:36.561 No valid GPT data, bailing 00:04:36.561 21:03:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.561 21:03:47 -- scripts/common.sh@391 -- # pt= 00:04:36.561 21:03:47 -- scripts/common.sh@392 -- # return 1 00:04:36.561 21:03:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:36.561 1+0 records in 00:04:36.561 1+0 records out 00:04:36.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129484 s, 81.0 MB/s 00:04:36.561 21:03:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.561 21:03:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.561 21:03:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:36.561 21:03:47 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:36.561 21:03:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:36.561 No valid GPT data, bailing 00:04:36.561 21:03:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:36.561 21:03:47 -- scripts/common.sh@391 -- # pt= 00:04:36.561 21:03:47 -- scripts/common.sh@392 -- # return 1 00:04:36.561 21:03:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:36.561 1+0 records in 00:04:36.561 1+0 records out 00:04:36.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00324319 s, 323 MB/s 00:04:36.561 21:03:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.561 21:03:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.561 21:03:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:36.561 21:03:47 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:36.561 21:03:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:36.561 No valid GPT data, bailing 00:04:36.561 21:03:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:36.561 21:03:48 -- scripts/common.sh@391 -- # pt= 00:04:36.561 21:03:48 -- scripts/common.sh@392 -- # return 1 00:04:36.561 21:03:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:36.561 1+0 records in 00:04:36.561 1+0 records out 00:04:36.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378675 s, 277 MB/s 00:04:36.561 21:03:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.561 21:03:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.561 21:03:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:36.561 21:03:48 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:36.561 21:03:48 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:36.820 No valid GPT data, bailing 00:04:36.820 21:03:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:36.820 21:03:48 -- scripts/common.sh@391 -- # pt= 00:04:36.820 21:03:48 -- scripts/common.sh@392 -- # return 1 00:04:36.820 21:03:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:36.820 1+0 records in 00:04:36.820 1+0 records out 00:04:36.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513998 s, 204 MB/s 00:04:36.820 21:03:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.820 21:03:48 -- 
spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.820 21:03:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:36.820 21:03:48 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:36.820 21:03:48 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:36.820 No valid GPT data, bailing 00:04:36.820 21:03:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:36.820 21:03:48 -- scripts/common.sh@391 -- # pt= 00:04:36.820 21:03:48 -- scripts/common.sh@392 -- # return 1 00:04:36.820 21:03:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:36.820 1+0 records in 00:04:36.820 1+0 records out 00:04:36.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427611 s, 245 MB/s 00:04:36.820 21:03:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.820 21:03:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:36.820 21:03:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:36.820 21:03:48 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:36.820 21:03:48 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:36.820 No valid GPT data, bailing 00:04:36.820 21:03:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:36.820 21:03:48 -- scripts/common.sh@391 -- # pt= 00:04:36.820 21:03:48 -- scripts/common.sh@392 -- # return 1 00:04:36.820 21:03:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:36.820 1+0 records in 00:04:36.820 1+0 records out 00:04:36.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500144 s, 210 MB/s 00:04:36.820 21:03:48 -- spdk/autotest.sh@118 -- # sync 00:04:36.820 21:03:48 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:36.820 21:03:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:36.820 21:03:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:38.756 21:03:50 -- spdk/autotest.sh@124 -- # uname -s 00:04:38.756 21:03:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:38.756 21:03:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:38.756 21:03:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.756 21:03:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.756 21:03:50 -- common/autotest_common.sh@10 -- # set +x 00:04:38.756 ************************************ 00:04:38.756 START TEST setup.sh 00:04:38.756 ************************************ 00:04:38.756 21:03:50 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:38.756 * Looking for test storage... 
00:04:38.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.756 21:03:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:38.756 21:03:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:38.756 21:03:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:38.756 21:03:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.756 21:03:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.756 21:03:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.756 ************************************ 00:04:38.756 START TEST acl 00:04:38.756 ************************************ 00:04:38.756 21:03:50 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.015 * Looking for test storage... 00:04:39.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.015 21:03:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:39.015 21:03:50 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:39.015 21:03:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:39.016 21:03:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.016 21:03:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:39.016 21:03:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:39.016 21:03:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:39.016 21:03:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:39.016 21:03:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:39.016 21:03:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.016 21:03:50 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.393 21:03:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:40.393 21:03:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:40.393 21:03:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.393 21:03:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:40.393 21:03:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.393 21:03:51 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:40.652 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:40.652 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.652 21:03:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.220 Hugepages 00:04:41.220 node hugesize free / total 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.220 00:04:41.220 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.220 21:03:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:41.479 21:03:52 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:41.479 21:03:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.479 21:03:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.479 21:03:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:41.479 ************************************ 00:04:41.479 START TEST denied 00:04:41.479 ************************************ 00:04:41.479 21:03:52 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:41.479 21:03:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:41.479 21:03:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:41.479 21:03:52 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:41.479 21:03:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.479 21:03:52 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.854 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:42.854 21:03:54 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:42.854 21:03:54 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:42.854 21:03:54 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:42.854 21:03:54 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:42.854 21:03:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:42.854 21:03:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:42.855 21:03:54 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:42.855 21:03:54 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:42.855 21:03:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.855 21:03:54 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.418 00:04:49.418 real 0m7.213s 00:04:49.418 user 0m0.892s 00:04:49.418 sys 0m1.353s 00:04:49.418 21:04:00 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.418 21:04:00 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:49.418 ************************************ 00:04:49.418 END TEST denied 00:04:49.418 ************************************ 00:04:49.418 21:04:00 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:49.418 21:04:00 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:49.418 21:04:00 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.418 21:04:00 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.418 21:04:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:49.418 ************************************ 00:04:49.418 START TEST allowed 00:04:49.418 ************************************ 00:04:49.418 21:04:00 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:49.418 21:04:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:49.418 21:04:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:49.419 21:04:00 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:49.419 21:04:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.419 21:04:00 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.984 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in 
"$@" 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.984 21:04:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.920 00:04:50.920 real 0m2.221s 00:04:50.920 user 0m1.000s 00:04:50.920 sys 0m1.207s 00:04:50.920 21:04:02 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.920 ************************************ 00:04:50.920 END TEST allowed 00:04:50.920 21:04:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:50.920 ************************************ 00:04:50.920 21:04:02 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:50.920 ************************************ 00:04:50.920 END TEST acl 00:04:50.920 ************************************ 00:04:50.920 00:04:50.920 real 0m12.163s 00:04:50.920 user 0m3.173s 00:04:50.920 sys 0m4.004s 00:04:50.920 21:04:02 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.920 21:04:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:51.181 21:04:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:51.181 21:04:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:51.181 21:04:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.181 21:04:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.181 21:04:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.181 ************************************ 00:04:51.181 START TEST hugepages 00:04:51.181 ************************************ 00:04:51.181 21:04:02 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:51.181 * Looking for test storage... 
00:04:51.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5826148 kB' 'MemAvailable: 7421724 kB' 'Buffers: 2436 kB' 'Cached: 1808904 kB' 'SwapCached: 0 kB' 'Active: 448100 kB' 'Inactive: 1465168 kB' 'Active(anon): 112440 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 103576 kB' 'Mapped: 51516 kB' 'Shmem: 10512 kB' 'KReclaimable: 63372 kB' 'Slab: 136228 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 72856 kB' 'KernelStack: 6252 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 326504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.181 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
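The backslash runs like \H\u\g\e\p\a\g\e\s\i\z\e above are xtrace notation, not script text: when the right-hand side of a [[ == ]] test is quoted, bash's trace escapes every character to show the operand is matched literally rather than as a glob pattern. For instance:

  set -x
  key=Hugepagesize
  [[ MemFree == "$key" ]]   # traced as: [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]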
00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.182 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
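Every iteration above is the same four trace lines with the next /proc/meminfo key substituted; the loop only exits once the requested key matches, which happens with Hugepagesize and the `echo 2048` immediately below. Condensed, the get_meminfo helper being traced is roughly the following sketch (the real helper first snapshots the file with `mapfile -t` and strips any leading "Node <n> " prefix, at setup/common.sh@28-29 above, so per-node meminfo files parse the same way):

  get_meminfo() {                       # sketch; values come back in kB
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "$val"                     # e.g. 2048 for Hugepagesize
        return 0
      fi
    done < /proc/meminfo
    return 1                            # requested key not present
  }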
00:04:51.183 21:04:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:51.183 21:04:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:51.183 21:04:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.183 21:04:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.183 21:04:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.183 ************************************ 00:04:51.183 START TEST default_setup 00:04:51.183 ************************************ 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.183 21:04:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.314 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.314 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.314 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.577 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:52.577 
21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.577 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913312 kB' 'MemAvailable: 9508636 kB' 'Buffers: 2436 kB' 'Cached: 1808888 kB' 'SwapCached: 0 kB' 'Active: 466360 kB' 'Inactive: 1465180 kB' 'Active(anon): 130700 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121552 kB' 'Mapped: 51660 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135300 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72452 kB' 'KernelStack: 6240 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
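While verify_nr_hugepages scans for AnonHugePages above (HugePages_Surp comes next), it is worth recalling where the 1024-page target came from: get_test_nr_hugepages 2097152 0 at the start of the test (setup/hugepages.sh@136 above) divides the requested size by the Hugepagesize probed earlier, both in kB, and pins the result on node 0:

  echo $(( 2097152 / 2048 ))            # -> 1024 pages, all assigned to node 0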
00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7917472 kB' 'MemAvailable: 9512804 kB' 'Buffers: 2436 kB' 'Cached: 1808888 kB' 'SwapCached: 0 kB' 'Active: 465964 kB' 'Inactive: 1465188 kB' 'Active(anon): 130304 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121204 kB' 'Mapped: 51464 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135296 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72448 kB' 'KernelStack: 6240 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.579 21:04:03 setup.sh.hugepages.default_setup 
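The xtrace above is the whole of the get_meminfo pattern: read the chosen meminfo file into an array, strip any per-node prefix, then scan line by line until the requested field matches and echo its value. A minimal sketch of that pattern, reconstructed from the trace rather than copied from SPDK's setup/common.sh (the real file's line numbers, argument handling, and error paths may differ):

#!/usr/bin/env bash
# Sketch of the field-scan pattern visible in the xtrace; reconstructed,
# not SPDK's verbatim setup/common.sh.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Prefer the per-node view when a node was requested and sysfs has it.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Split each "Field: value kB" line on ':' and spaces; echo the value
    # of the requested field and stop.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With node unset, the sysfs test traced at common.sh@23 fails and the helper falls back to /proc/meminfo, which is exactly what this run shows.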
00:04:52.579 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: scan runs MemTotal .. HugePages_Rsvd, each '# continue', until HugePages_Surp matches]
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7916288 kB' 'MemAvailable: 9511620 kB' 'Buffers: 2436 kB' 'Cached: 1808888 kB' 'SwapCached: 0 kB' 'Active: 466004 kB' 'Inactive: 1465188 kB' 'Active(anon): 130344 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121496 kB' 'Mapped: 51464 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135296 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72448 kB' 'KernelStack: 6256 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
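Under the sketch above, the queries traced in this section reduce to plain command substitutions. The values in the comments are the ones visible in the snapshots; the node-qualified call is hypothetical, since this run never sets node:

anon=$(get_meminfo AnonHugePages)    # 0 in this run
surp=$(get_meminfo HugePages_Surp)   # 0
resv=$(get_meminfo HugePages_Rsvd)   # 0
free0=$(get_meminfo MemFree 0)       # hypothetical: per-node view via /sys/devices/system/node/node0/meminfo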
00:04:52.581 21:04:03 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: scan runs MemTotal .. HugePages_Free, each '# continue', until HugePages_Rsvd matches]
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:52.583 nr_hugepages=1024
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:52.583 resv_hugepages=0
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:52.583 surplus_hugepages=0
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:52.583 anon_hugepages=0
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
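The two arithmetic checks at hugepages.sh@107 and @109 are the actual assertion of default_setup: the kernel must report exactly the requested number of hugepages, with no surplus or reserved pages hiding in the total. A plausible reconstruction (the left-hand 1024 is already expanded in the xtrace, so the exact source expression behind it is an assumption):

nr_hugepages=1024                     # requested by the test
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0
# Both checks reduce to 1024 == 1024 here; either failing aborts the test.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
(( $(get_meminfo HugePages_Total) == nr_hugepages ))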
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7916288 kB' 'MemAvailable: 9511620 kB' 'Buffers: 2436 kB' 'Cached: 1808888 kB' 'SwapCached: 0 kB' 'Active: 466060 kB' 'Inactive: 1465188 kB' 'Active(anon): 130400 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121524 kB' 'Mapped: 51464 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135292 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72444 kB' 'KernelStack: 6240 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:04:52.583 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: scan runs MemTotal .. VmallocUsed, each '# continue'; trace continues past the end of this excerpt]
setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
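[Editor's note] The long runs of "continue" lines in this log are setup/common.sh's get_meminfo helper scanning a meminfo file one key at a time. A minimal sketch of that pattern, reconstructed from the xtrace above rather than copied from the SPDK source (the helper name and variables are taken from the trace; details are an assumption):

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from the
    # per-node view when a NUMA node id is given. Sketch only: it mirrors the
    # behaviour visible in the xtrace, not the verbatim SPDK implementation.
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}      # per-node files prefix keys with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Total    # -> 1024 in the run above

Each non-matching key costs one [[ ]] test plus one continue, which is exactly the (test, continue, IFS, read) quartet repeated throughout the trace.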
00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:52.584 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:52.585 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7916288 kB' 'MemUsed: 4325684 kB' 'SwapCached: 0 kB' 'Active: 465724 kB' 'Inactive: 1465188 kB' 'Active(anon): 130064 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 1811324 kB' 'Mapped: 51464 kB' 'AnonPages: 121448 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135292 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... repetitive xtrace condensed: setup/common.sh@31-32 skips every node0 key from MemTotal through HugePages_Free with "continue" until HugePages_Surp matches ...]
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.586 node0=1024 expecting 1024
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:52.586
00:04:52.586 real 0m1.386s
00:04:52.586 user 0m0.642s
00:04:52.586 sys 0m0.716s
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:52.586 21:04:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:52.586 ************************************
00:04:52.586 END TEST default_setup
00:04:52.586 ************************************
00:04:52.586 21:04:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
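[Editor's note] The "node0=1024 expecting 1024" line is default_setup's closing cross-check: the page count the test configured per NUMA node is compared against what the kernel actually reports for that node. A hedged sketch of that bookkeeping, reusing the array names from the trace and the get_meminfo sketch above (an illustration, not the verbatim SPDK script):

    shopt -s extglob nullglob
    declare -a nodes_test nodes_sys             # indexed arrays keyed by node id
    nodes_test[0]=1024                          # what the test asked for on node0

    # Read back what the kernel reports per node, as get_nodes does above
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]   # both 1024 here
    done

With a single node and 1024 pre-allocated pages, both arrays hold 1024, so the hugepages.sh@130 comparison passes and the test ends cleanly.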
00:04:52.586 21:04:04 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:52.586 21:04:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:52.586 21:04:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.586 21:04:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:52.844 ************************************
00:04:52.844 START TEST per_node_1G_alloc
00:04:52.844 ************************************
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:52.844 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:53.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:53.113 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.113 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.113 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.113 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.396 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8959352 kB' 'MemAvailable: 10554680 kB' 'Buffers: 2436 kB' 'Cached: 1808884 kB' 'SwapCached: 0 kB' 'Active: 466584 kB' 'Inactive: 1465184 kB' 'Active(anon): 130924 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121944 kB' 'Mapped: 51540 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135392 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72544 kB' 'KernelStack: 6368 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
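[Editor's note] The nr_hugepages=512 chosen above is plain arithmetic: get_test_nr_hugepages was asked for 1048576 kB (1 GiB) on node 0, and this VM's default hugepage size is 2048 kB ('Hugepagesize: 2048 kB' in the dump above). A one-line check:

    # 1 GiB requested / 2 MiB per page = 512 hugepages, all pinned to node 0
    echo $((1048576 / 2048))    # -> 512, matching nr_hugepages=512 and NRHUGE=512 HUGENODE=0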
[... repetitive xtrace condensed: setup/common.sh@31-32 skips every /proc/meminfo key from MemTotal through HardwareCorrupted with "continue" until AnonHugePages matches ...]
00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
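[Editor's note] The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test stepped through earlier is verify_nr_hugepages checking the kernel's transparent-hugepage mode; because THP is not "[never]" on this box, the script samples AnonHugePages (0 kB here), presumably so THP-backed memory can be accounted for separately from the explicit hugepages. A sketch of that guard (illustrative, not the verbatim SPDK code):

    # /sys/kernel/mm/transparent_hugepage/enabled reads e.g. "always [madvise] never";
    # the bracketed word is the active mode.
    anon=0
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
    fi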
anon=0 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8959352 kB' 'MemAvailable: 10554680 kB' 'Buffers: 2436 kB' 'Cached: 1808884 kB' 'SwapCached: 0 kB' 'Active: 466272 kB' 'Inactive: 1465184 kB' 'Active(anon): 130612 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121632 kB' 'Mapped: 51600 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135388 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72540 kB' 'KernelStack: 6256 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc 
00:04:53.397 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [every field of /proc/meminfo from MemTotal through HugePages_Rsvd compared against HugePages_Surp and skipped via 'continue'; repetitive iterations elided]
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8959352 kB' 'MemAvailable: 10554680 kB' 'Buffers: 2436 kB' 'Cached: 1808884 kB' 'SwapCached: 0 kB' 'Active: 466040 kB' 'Inactive: 1465184 kB' 'Active(anon): 130380 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121356 kB' 'Mapped: 51600 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135388 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72540 kB' 'KernelStack: 6208 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
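Note that node= is empty in every call here, so the common.sh@23 test probes the non-existent literal path /sys/devices/system/node/node/meminfo and the helper falls back to /proc/meminfo. With a node argument it would presumably read the per-node sysfs file instead, which is also why the "Node N" prefix strip exists. A hypothetical per-node call, assuming the node number is the second argument:

  # hypothetical: query node 0 directly; lines in
  # /sys/devices/system/node/node0/meminfo read "Node 0 HugePages_Free: ...",
  # hence the mem=("${mem[@]#Node +([0-9]) }") strip traced above
  node0_free=$(get_meminfo HugePages_Free 0)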
00:04:53.399 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [every field of /proc/meminfo from MemTotal through HugePages_Free compared against HugePages_Rsvd and skipped via 'continue'; repetitive iterations elided]
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
nr_hugepages=512
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8959352 kB' 'MemAvailable: 10554680 kB' 'Buffers: 2436 kB' 'Cached: 1808884 kB' 'SwapCached: 0 kB' 'Active: 465880 kB' 'Inactive: 1465184 kB' 'Active(anon): 130220 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121208 kB' 'Mapped: 51604 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135392 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72544 kB' 'KernelStack: 6216 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
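The two arithmetic guards at hugepages.sh@107 and @109 are the actual assertion of this step: the 512 pages requested per node must match the kernel's ledger once surplus and reserved pages are counted. A standalone sketch, with the variable roles inferred from the trace rather than copied from the script:

  # Verify the hugepage ledger adds up (names assumed, values from this run).
  requested=512
  anon=$(get_meminfo AnonHugePages)             # 0 here
  surp=$(get_meminfo HugePages_Surp)            # 0 here
  resv=$(get_meminfo HugePages_Rsvd)            # 0 here
  nr_hugepages=$(get_meminfo HugePages_Total)   # the scan traced next
  (( requested == nr_hugepages + surp + resv )) || exit 1   # 512 == 512+0+0
  (( requested == nr_hugepages )) || exit 1                 # 512 == 512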
00:04:53.401 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [every field of /proc/meminfo from MemTotal through AnonHugePages compared against HugePages_Total and skipped via 'continue'; repetitive iterations elided]
00:04:53.402 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:53.402 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:53.402 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.403 21:04:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8959352 kB' 'MemUsed: 3282620 kB' 'SwapCached: 0 kB' 'Active: 465700 kB' 'Inactive: 1465184 kB' 'Active(anon): 130040 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1811320 kB' 'Mapped: 51604 kB' 'AnonPages: 121032 kB' 'Shmem: 10472 kB' 'KernelStack: 6232 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135392 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.403 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.404 node0=512 expecting 512 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:53.404 00:04:53.404 real 0m0.686s 00:04:53.404 user 0m0.320s 00:04:53.404 sys 0m0.412s 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.404 21:04:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.404 ************************************ 00:04:53.404 END TEST per_node_1G_alloc 00:04:53.404 ************************************ 00:04:53.404 21:04:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:53.404 21:04:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:53.404 21:04:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.404 21:04:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.404 21:04:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.404 ************************************ 00:04:53.404 START TEST even_2G_alloc 00:04:53.404 ************************************ 00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.404 21:04:04 
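The hundreds of "continue" entries elided above all come from one small parser: setup/common.sh reads the chosen meminfo file into an array, strips the per-node prefix, and scans key by key until it hits the requested field. A minimal standalone sketch of that lookup, assuming the same file layout the log shows (the helper name and paths mirror the trace; this rewrite is illustrative, not SPDK's exact code):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern, as in the trace

# Look up one field in /proc/meminfo, or in the per-node copy under sysfs
# when a node number is given. Every key that is not the one requested is
# the "continue" seen in the xtrace above.
get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total 0   # prints 512 given the node0 state logged above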
00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:53.404 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:53.405 21:04:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:53.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:53.925 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.925 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.925 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.925 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.925 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907356 kB' 'MemAvailable: 9502696 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 466112 kB' 'Inactive: 1465196 kB' 'Active(anon): 130452 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121812 kB' 'Mapped: 51612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135384 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72536 kB' 'KernelStack: 6232 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
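NRHUGE=1024 with HUGE_EVEN_ALLOC=yes, set just before the scripts/setup.sh call above, asks for the 2 MiB hugepages to be spread evenly across NUMA nodes. A rough sketch of just that allocation step, using the kernel's per-node sysfs knob (an assumption-laden illustration: setup.sh itself also handles device binding and cleanup; this assumes root and a node count that divides NRHUGE):

#!/usr/bin/env bash
# Split NRHUGE 2 MiB hugepages evenly over the online NUMA nodes via the
# kernel's per-node nr_hugepages file. Run as root.
NRHUGE=${NRHUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$((NRHUGE / ${#nodes[@]}))
for node in "${nodes[@]}"; do
    echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
done
grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 each on the single-node VM logged here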
[xtrace elided: setup/common.sh@32 walks the snapshot (MemTotal through HardwareCorrupted) looking for AnonHugePages, continuing past each non-match]
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.926 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907496 kB' 'MemAvailable: 9502836 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 465932 kB' 'Inactive: 1465196 kB' 'Active(anon): 130272 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121420 kB' 'Mapped: 51612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135368 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72520 kB' 'KernelStack: 6264 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.927 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907496 kB' 'MemAvailable: 9502836 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 465824 kB' 'Inactive: 1465196 kB' 'Active(anon): 130164 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121536 kB' 'Mapped: 51460 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135376 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72528 kB' 'KernelStack: 6256 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.928 21:04:05 
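The scan above is the get_meminfo helper walking a cached copy of /proc/meminfo one "Key: value" pair at a time until the requested key matches, then echoing its value. A minimal standalone sketch of that pattern, reconstructed from the trace for illustration (the function name get_meminfo_sketch is ours; the real setup/common.sh differs in detail):

    #!/usr/bin/env bash
    # Look up one key in /proc/meminfo the way the trace shows:
    # split each line on ': ', skip non-matching keys via continue,
    # echo the value of the requested key and stop.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # MemTotal, Buffers, ... all skipped
            echo "$val"                        # numeric value, without the kB unit
            return 0
        done </proc/meminfo
        return 1                               # key not present
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this host

Note that IFS=': ' makes read treat both the colon and the padding spaces as field separators, which is why val comes back as a bare number and the trailing "kB" lands in the throwaway third field.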
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907496 kB' 'MemAvailable: 9502836 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 465824 kB' 'Inactive: 1465196 kB' 'Active(anon): 130164 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121536 kB' 'Mapped: 51460 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135376 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72528 kB' 'KernelStack: 6256 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.928 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31 "IFS=': '" / "read -r var val _" and @32 "continue" entries for every remaining non-matching meminfo key elided ...]
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:54.191 nr_hugepages=1024
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:54.191 resv_hugepages=0
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:54.191 surplus_hugepages=0
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:54.191 anon_hugepages=0
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
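With anon, surp, and resv collected, hugepages.sh checks that the pool it configured is fully accounted for before exercising it. A hedged sketch of that arithmetic, reusing the lookup helper sketched above (variable names mirror the trace; the exact guard in hugepages.sh may differ):

    # The pool is consistent when the kernel-reported total matches the
    # requested count and nothing has leaked into surplus or reservations.
    nr_hugepages=1024                                # requested pool size
    surp=$(get_meminfo_sketch HugePages_Surp)        # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)        # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)      # 1024 in this run

    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2

Both arithmetic guards in the trace succeed here because surplus and reserved counts are zero, so the 1024 configured 2 MiB pages are all free and accounted for.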
nr_hugepages )) 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907496 kB' 'MemAvailable: 9502836 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 465756 kB' 'Inactive: 1465196 kB' 'Active(anon): 130096 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121456 kB' 'Mapped: 51460 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135376 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72528 kB' 'KernelStack: 6224 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.191 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.192 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.192 21:04:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:54.192 [xtrace loop condensed: setup/common.sh@31-32 repeats the IFS=': ' read / compare / continue cycle for each remaining meminfo field (AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) until the requested field matches]
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
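The scan condensed above is the whole of get_meminfo's control flow: pick the per-node meminfo file when a node id is given, strip the "Node N " prefix, then split each line on ': ' and return the value of the requested field. As a readable summary, here is a minimal runnable sketch of that pattern, reconstructed from the trace rather than copied from the shipped setup/common.sh:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # Prefer the per-node view when a node id is passed and exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix
    # Split "Field: value kB" on ': ' and echo the requested value.
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }

  get_meminfo HugePages_Total     # -> 1024 on this box
  get_meminfo HugePages_Surp 0    # per-node lookup, -> 0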
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.193 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907496 kB' 'MemUsed: 4334476 kB' 'SwapCached: 0 kB' 'Active: 465744 kB' 'Inactive: 1465196 kB' 'Active(anon): 130084 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 1811328 kB' 'Mapped: 51460 kB' 'AnonPages: 121464 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135376 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:54.193 [xtrace loop condensed: setup/common.sh@31-32 read / compare / continue repeated for each node0 meminfo field from MemTotal through HugePages_Free until HugePages_Surp matched]
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:54.194 node0=1024 expecting 1024
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:54.194 
00:04:54.194 real 0m0.678s
00:04:54.194 user 0m0.327s
00:04:54.194 sys 0m0.399s
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:54.194 21:04:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:54.194 ************************************
00:04:54.194 END TEST even_2G_alloc
00:04:54.194 ************************************
00:04:54.194 21:04:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
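For reference, the check that just passed combines two assertions: the global pool must equal the requested pages plus surplus plus reserved, and each NUMA node must hold its expected share (a single node here, so node0 carries all 1024). A hypothetical standalone recap, assuming the get_meminfo sketch above; the real hugepages.sh accumulates per-node surp/resv in arrays instead:

  # Recap of hugepages.sh@110 and @115-130 as seen in the trace.
  verify_hugepages() {
    local expected=$1 node=0
    local total surp resv node_total
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # Global pool accounting: requested + surplus + reserved.
    (( total == expected + surp + resv )) || return 1
    # Per-node share, read from /sys/devices/system/node/node0/meminfo.
    node_total=$(get_meminfo HugePages_Total "$node")
    echo "node${node}=${node_total} expecting ${expected}"
    [[ $node_total == "$expected" ]]
  }

  verify_hugepages 1024   # prints: node0=1024 expecting 1024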
00:04:54.194 21:04:05 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:54.194 21:04:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:54.194 21:04:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:54.194 21:04:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:54.194 ************************************
00:04:54.194 START TEST odd_alloc
00:04:54.194 ************************************
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:54.194 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.195 21:04:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:54.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:54.716 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:54.716 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:54.716 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:54.716 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
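The odd_alloc parameters above encode a small size-to-pages computation: HUGEMEM=2049 MB becomes size=2098176 kB, which at the default 2048 kB hugepage size yields the deliberately odd count nr_hugepages=1025. The round-up below is an assumed reconstruction; the trace only shows the input and the result:

  # Assumed round-up from kB to whole hugepages.
  default_hugepages=2048                  # kB, from 'Hugepagesize: 2048 kB'
  size=$(( 2049 * 1024 ))                 # HUGEMEM=2049 MB -> 2098176 kB
  nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
  echo "$nr_hugepages"                    # -> 1025, an intentionally odd count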
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.716 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.717 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.717 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899496 kB' 'MemAvailable: 9494836 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 466404 kB' 'Inactive: 1465196 kB' 'Active(anon): 130744 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121852 kB' 'Mapped: 51640 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135352 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72504 kB' 'KernelStack: 6288 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:04:54.717 [xtrace loop condensed: setup/common.sh@31-32 read / compare / continue repeated for each field from MemTotal through HardwareCorrupted until AnonHugePages matched]
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
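The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above guards against transparent hugepages skewing the accounting: when the THP setting is anything other than "[never]", AnonHugePages is sampled (0 kB in this run) so THP-backed memory can be counted separately from the static pool. A minimal sketch of that guard, reusing the hypothetical get_meminfo from earlier:

  # THP guard as traced at hugepages.sh@96-97.
  anon=0
  thp=/sys/kernel/mm/transparent_hugepage/enabled
  if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
  fi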
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.718 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899564 kB' 'MemAvailable: 9494904 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 466068 kB' 'Inactive: 1465196 kB' 'Active(anon): 130408 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121824 kB' 'Mapped: 51588 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135364 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72516 kB' 'KernelStack: 6240 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.719 [xtrace loop condensed: setup/common.sh@31-32 read / compare / continue repeated for each field from MemTotal through ShmemPmdMapped against HugePages_Surp]
00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.719 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.720 21:04:06 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899564 kB' 'MemAvailable: 9494904 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 466252 kB' 'Inactive: 1465196 kB' 'Active(anon): 130592 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121752 kB' 'Mapped: 51648 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135364 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72516 kB' 'KernelStack: 6224 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.720 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.721 
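What this trace is exercising: setup/common.sh's get_meminfo walks every "Key: value" line of /proc/meminfo (or of a per-NUMA-node meminfo file) until it reaches the requested key, which is why each lookup logs one IFS/read/compare/continue quartet per key. The following is a minimal sketch of that helper reconstructed from the trace above, not the verbatim SPDK source; the loop structure and file reads are assumptions.

    # Sketch of get_meminfo as reconstructed from the trace (not verbatim source).
    # get_meminfo KEY [NODE] prints KEY's value from /proc/meminfo, or from the
    # per-node counters when NODE is given and that node exists.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, prefer the per-NUMA-node counters when present.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            # Split "Key:  value kB" into var=Key, val=value.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

Under this sketch, get_meminfo HugePages_Rsvd prints 0 on this host, matching the resv=0 assignment seen above.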
00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:54.721 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899904 kB' 'MemAvailable: 9495244 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 466060 kB' 'Inactive: 1465196 kB' 'Active(anon): 130400 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121496 kB' 'Mapped: 51460 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135356 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72508 kB' 'KernelStack: 6192 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:04:54.722 21:04:06 setup.sh.hugepages.odd_alloc -- [repetitive per-key trace condensed: every key from MemTotal through Unaccepted is read and skipped because it is not HugePages_Total]
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
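The arithmetic being checked here: an odd-numbered allocation of 1025 hugepages only passes if the kernel's HugePages_Total equals the requested nr_hugepages plus surplus and reserved pages, both of which were just read back as 0. A sketch of that accounting, assuming the variable names shown in the trace (nr_hugepages was set to 1025 earlier in the test; anon_hugepages is collected elsewhere in the script and not shown in this excerpt):

    # Sketch of the consistency check traced at setup/hugepages.sh@99-110 above;
    # reconstructed, not verbatim source.
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the configured pool -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # reserved-but-unfaulted pages             -> 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    # The kernel must report exactly the odd page count that was requested:
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))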
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
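get_nodes enumerates the NUMA nodes under /sys with an extglob pattern and records the expected page count per node; on this single-node VM that yields no_nodes=1 with all 1025 pages expected on node0. A reconstruction under those assumptions follows; nodes_test is presumably derived from nodes_sys elsewhere in hugepages.sh, which this excerpt does not show.

    # Sketch of get_nodes and the per-node pass traced at setup/hugepages.sh@27-33
    # and @115-117; reconstructed, not verbatim source.
    shopt -s extglob
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} keeps only the trailing index, e.g. .../node0 -> 0.
            nodes_sys[${node##*node}]=$nr_hugepages
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # at least one NUMA node must be present
    }

    # Each node is then expected to hold its share plus any reserved pages, with
    # the per-node surplus read from that node's own meminfo file:
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        get_meminfo HugePages_Surp "$node"   # reads /sys/devices/system/node/node0/meminfo
    done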
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899904 kB' 'MemUsed: 4342068 kB' 'SwapCached: 0 kB' 'Active: 465728 kB' 'Inactive: 1465196 kB' 'Active(anon): 130068 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1811328 kB' 'Mapped: 51460 kB' 'AnonPages: 121452 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135360 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.723 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [... every field of the node0 dump above, MemTotal through HugePages_Free, is tested against HugePages_Surp and skipped with 'continue' ...]
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:54.985 node0=1025 expecting 1025
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
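What hugepages.sh is doing around @110-@130 above: the system-wide HugePages_Total (1025, the deliberately odd allocation under test) must equal the requested nr_hugepages plus surplus and reserved pages, and each NUMA node's sysfs count must match the per-node expectation once reserved pages and that node's surplus are folded in; that is where the 'node0=1025 expecting 1025' line comes from. A sketch of that accounting, reusing the get_meminfo sketch above (and its extglob setting); the checks and names follow the trace, the control-flow glue is an assumption:

# Hypothetical reconstruction of the verify pass (hugepages.sh@110-130).
verify_hugepage_accounting() {
    local node
    local -a nodes_sys nodes_test
    local nr_hugepages=1025
    local surp resv
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)

    # 1) System-wide total must match request + surplus + reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

    # 2) What sysfs says each node actually got.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done

    # 3) Expected per-node count: request + reserved + that node's surplus.
    nodes_test[0]=$nr_hugepages
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
    done
}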
00:04:54.985 real	0m0.680s
00:04:54.985 user	0m0.318s
00:04:54.985 sys	0m0.406s
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:54.985 ************************************
00:04:54.985 END TEST odd_alloc
00:04:54.985 ************************************
00:04:54.985 21:04:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:54.985 21:04:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:54.985 21:04:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:54.985 21:04:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:54.985 21:04:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:54.985 21:04:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:54.985 ************************************
00:04:54.985 START TEST custom_alloc
00:04:54.985 ************************************
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:54.985 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
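The jump from 'get_test_nr_hugepages 1048576' to 'nr_hugepages=512' above is a kB-to-pages conversion: 1048576 kB (1 GiB) divided by the 2048 kB default hugepage size that the meminfo dumps report as 'Hugepagesize: 2048 kB'. A hypothetical sketch of that arithmetic, reusing get_meminfo from the earlier sketch; the helper name comes from the trace, the body is an assumption:

get_test_nr_hugepages() {
    local size=$1                                  # requested pool size in kB (1048576 kB = 1 GiB)
    local default_hugepages
    default_hugepages=$(get_meminfo Hugepagesize)  # 2048 (kB) on this VM

    (( size >= default_hugepages )) || return 1
    # Sets the caller's nr_hugepages, as in the trace: 1048576 / 2048 = 512.
    nr_hugepages=$(( size / default_hugepages ))
}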
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.986 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:55.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:55.510 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.510 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.510 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.510 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
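The handoff to the allocator is visible at hugepages.sh@182-@187 above: custom_alloc accumulates one 'nodes_hp[<node>]=<pages>' entry per node into HUGENODE (the 'local IFS=,' earlier suggests the entries are comma-joined) and then runs setup.sh with it, which allocates 512 2 MiB pages on node 0. Reproducing that invocation by hand would look like:

HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh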
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951532 kB' 'MemAvailable: 10546872 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 466340 kB' 'Inactive: 1465196 kB' 'Active(anon): 130680 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121812 kB' 'Mapped: 51556 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135360 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72512 kB' 'KernelStack: 6256 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:04:55.510 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [... every field from MemTotal through HardwareCorrupted tested against AnonHugePages and skipped with 'continue' ...]
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.511 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951532 kB' 'MemAvailable: 10546872 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 465852 kB' 'Inactive: 1465196 kB' 'Active(anon): 130192 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121528 kB' 'Mapped: 51556 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135360 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72512 kB' 'KernelStack: 6208 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 
21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.512 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
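The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries above and below are one xtrace'd pass over /proc/meminfo: each line is split on ': ' into a key and a value, every non-matching key takes the continue branch, and the value is echoed once the requested key is reached. A minimal standalone sketch of that scanning pattern, assuming plain bash (get_meminfo_sketch is an illustrative name, not the exact setup/common.sh source):

    # Scan /proc/meminfo for one key; every non-matching key takes the
    # 'continue' branch, which is what produces the repeated entries in
    # this trace.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # "HugePages_Surp:  0" -> var=HugePages_Surp val=0
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
    }
    # Usage matching the surrounding trace: surp=$(get_meminfo_sketch HugePages_Surp)  -> 0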
00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951532 kB' 'MemAvailable: 10546872 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 465812 kB' 'Inactive: 1465196 kB' 'Active(anon): 130152 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121480 kB' 'Mapped: 51460 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135364 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72516 kB' 'KernelStack: 6208 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.513 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 
21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.514 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.515 nr_hugepages=512 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:55.515 resv_hugepages=0 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.515 surplus_hugepages=0 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.515 anon_hugepages=0 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:55.515 21:04:06 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951532 kB' 'MemAvailable: 10546872 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 465832 kB' 'Inactive: 1465196 kB' 'Active(anon): 130172 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121320 kB' 'Mapped: 51460 kB' 'Shmem: 10472 kB' 'KReclaimable: 62848 kB' 'Slab: 135364 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72516 kB' 'KernelStack: 6256 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.515 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
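The hugepages.sh@99-@110 entries above (surp=0, resv=0, nr_hugepages=512) and the HugePages_Total scan in progress here feed a consistency check: the pool must report exactly the 512 pages that were requested, with nothing reserved or surplus. A hedged sketch of that bookkeeping, reusing get_meminfo_sketch from the earlier note (verify_hugepages_sketch is an illustrative name, not SPDK's):

    # Mirrors the (( 512 == nr_hugepages + surp + resv )) checks at
    # hugepages.sh@107/@110 above; returns nonzero if the pool drifted.
    verify_hugepages_sketch() {
        local want=$1 total rsvd surp
        surp=$(get_meminfo_sketch HugePages_Surp)    # surp=0 in this run
        rsvd=$(get_meminfo_sketch HugePages_Rsvd)    # resv=0 in this run
        total=$(get_meminfo_sketch HugePages_Total)  # the scan underway here echoes 512
        (( total == want + surp + rsvd ))
    }
    # verify_hugepages_sketch 512 succeeds on this box: 512 == 512 + 0 + 0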
00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.516 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951532 kB' 'MemUsed: 3290440 kB' 'SwapCached: 0 kB' 'Active: 465832 kB' 'Inactive: 1465196 kB' 'Active(anon): 130172 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1811328 kB' 'Mapped: 51460 kB' 'AnonPages: 121580 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62848 kB' 'Slab: 135364 kB' 'SReclaimable: 62848 kB' 'SUnreclaim: 72516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue
00:04:55.517 21:04:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (repetitive xtrace trimmed: the field scan reads and skips Inactive(anon) through HugePages_Free before reaching the requested field)
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.518 node0=512 expecting 512
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:55.518 real 0m0.688s
00:04:55.518 user 0m0.323s
00:04:55.518 sys 0m0.413s
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:55.518 21:04:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:55.518 ************************************
00:04:55.518 END TEST custom_alloc
00:04:55.518 ************************************
00:04:55.778 21:04:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:55.778 21:04:07 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:55.778 21:04:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:55.778 21:04:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:55.778 21:04:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:55.778 ************************************
00:04:55.778 START TEST no_shrink_alloc
00:04:55.778 ************************************
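The pass just recorded rests on a compact bash idiom: verify_nr_hugepages marks each expected and each observed per-node count as an index of a throwaway indexed array, so the two auto-sorted key lists can be compared with the single string test traced at setup/hugepages.sh@130. A minimal reconstruction from the visible xtrace; nodes_test and nodes_sys are populated earlier, outside this slice, so example values matching this run stand in for them:

    # Reconstruction of setup/hugepages.sh@126-130 as seen in the trace.
    nodes_test=([0]=512)                         # expected (example values from this run)
    nodes_sys=([0]=512)                          # observed (example values from this run)
    sorted_t=() sorted_s=()                      # indexed arrays: keys auto-sort
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1             # mark each expected count
        sorted_s[nodes_sys[node]]=1              # mark each observed count
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]   # here "512 == 512", so the test passes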
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:55.778 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:56.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:56.037 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.037 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.037 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.037 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
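The get_test_nr_hugepages trace above boils down to a per-node fan-out: every requested NUMA node is assigned the full allocation. A minimal sketch reconstructed from the visible setup/hugepages.sh lines; how nr_hugepages itself is derived from the byte size is not shown in this slice, so it is taken as a given here:

    # Fan-out of nr_hugepages across the requested nodes, as traced above.
    nr_hugepages=1024                      # value visible at hugepages.sh@57
    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")            # explicit node ids, e.g. "0"
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=1
        local -g nodes_test=()             # node id -> expected hugepage count
        if ((${#user_nodes[@]} > 0)); then
            for _no_nodes in "${user_nodes[@]}"; do
                nodes_test[_no_nodes]=$_nr_hugepages   # hugepages.sh@71 in the trace
            done
            return 0
        fi
        # (the no-argument fallback path is not exercised in this run)
    }
    get_test_nr_hugepages_per_node 0       # leaves nodes_test=([0]=1024)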
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906132 kB' 'MemAvailable: 9501464 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 463236 kB' 'Inactive: 1465196 kB' 'Active(anon): 127576 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118416 kB' 'Mapped: 50852 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135184 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72352 kB' 'KernelStack: 6184 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:04:56.302 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (repetitive xtrace trimmed: the field scan reads and skips MemTotal through HardwareCorrupted before reaching the requested field)
00:04:56.303 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.303 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.303 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.303 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
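anon=0 closes the first full get_meminfo round trip. Reconstructed from the setup/common.sh line numbers visible in the trace (details the log does not show may differ), the helper reads the relevant meminfo file once, strips any per-node "Node N" prefix, and scans field by field until it can echo the requested value; that scan is what produced the long read/continue runs trimmed above:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix strip below

    # Sketch of get_meminfo, paraphrased from the visible xtrace.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node id, prefer the per-node meminfo exposed via sysfs.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
        # Split each "Field: value [kB]" line; stop at the requested field.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on this host, per the dumps above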
00:04:56.303 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.303 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # (same preamble as above, now with get=HugePages_Surp)
00:04:56.303 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906236 kB' 'MemAvailable: 9501568 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 462912 kB' 'Inactive: 1465196 kB' 'Active(anon): 127252 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118408 kB' 'Mapped: 50724 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135172 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 6176 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
00:04:56.303 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (repetitive xtrace trimmed: the field scan reads and skips MemTotal through HugePages_Rsvd before reaching the requested field)
00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
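surp=0 is the second of three counters the verifier collects; HugePages_Rsvd follows below, and the earlier AnonHugePages read was gated on the [[ always [madvise] never != *[never]* ]] test at hugepages.sh@96. A sketch of that sequence, building on the get_meminfo sketch above (the sysfs path for the THP mode string is an assumption; the trace shows only the resulting pattern test):

    # The three counters feeding the verification, in trace order. The THP
    # mode file path is assumed; its "always [madvise] never" content matches
    # the string tested at hugepages.sh@96.
    anon=0
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 in the dumps above
    fi
    surp=$(get_meminfo HugePages_Surp)      # 0, as just traced
    resv=$(get_meminfo HugePages_Rsvd)      # fetched next in the log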
kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118372 kB' 'Mapped: 50724 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135172 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 6160 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 
21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
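The xtrace above is setup/common.sh's get_meminfo walking the meminfo contents one line at a time: IFS=': ' splits each line into a key and a value, every non-matching key hits "continue", and the value of the requested key is echoed back to the caller. A minimal standalone sketch of that parsing pattern (simplified from the trace; the function name and the direct while-read loop are ours, not SPDK's verbatim source, which stages the lines through a mem array first):

    #!/usr/bin/env bash
    # Return the value of one /proc/meminfo field the way the trace does it:
    # split on ': ', skip non-matching keys, echo the match and stop.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # traces as "continue" for every other key
            echo "$val"                        # e.g. "1024" for HugePages_Total
            return 0
        done < /proc/meminfo
        return 1
    }

    # Usage: surp=$(get_meminfo_value HugePages_Surp)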
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:56.307 nr_hugepages=1024
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.307 resv_hugepages=0
00:04:56.307 surplus_hugepages=0
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.307 anon_hugepages=0
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
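With surp=0 and resv=0 in hand, hugepages.sh asserts that the kernel's pool matches the requested reservation. The two arithmetic checks just traced, with the snapshot's values plugged in (a condensed sketch of what is being asserted, not the script's verbatim lines):

    # The total (1024, read from HugePages_Total) must equal the requested
    # pool plus any surplus and reserved pages; with surp=0 and resv=0
    # both checks reduce to 1024 == 1024 and succeed.
    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107
    (( 1024 == nr_hugepages ))                 # hugepages.sh@109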
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.307 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906596 kB' 'MemAvailable: 9501928 kB' 'Buffers: 2436 kB' 'Cached: 1808892 kB' 'SwapCached: 0 kB' 'Active: 462600 kB' 'Inactive: 1465196 kB' 'Active(anon): 126940 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118124 kB' 'Mapped: 50724 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135172 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 6144 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 338080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-32 xtrace elided: each field is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and hits "continue" until HugePages_Total matches ...]
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.309 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7906596 kB' 'MemUsed: 4335376 kB' 'SwapCached: 0 kB' 'Active: 462464 kB' 'Inactive: 1465196 kB' 'Active(anon): 126804 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1811328 kB' 'Mapped: 50724 kB' 'AnonPages: 117968 kB' 'Shmem: 10472 kB' 'KernelStack: 6164 kB' 'PageTables: 3528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 135164 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
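This call passes a node argument (get_meminfo HugePages_Surp 0), so the reader switches from /proc/meminfo to the per-node sysfs file, whose lines carry a "Node 0 " prefix; the trace strips it with an extglob substitution before parsing. A sketch of that path selection and prefix stripping, assuming shopt -s extglob (which the +([0-9]) patterns in the trace imply):

    #!/usr/bin/env bash
    shopt -s extglob                 # required for the +([0-9]) pattern below

    node=0
    mem_f=/proc/meminfo              # default, as in setup/common.sh@22
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0", so the same
    # key/value parser works for both the global and the per-node file.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"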
[... setup/common.sh@31-32 xtrace elided: each node0 meminfo field is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and hits "continue" until HugePages_Surp matches ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.310 node0=1024 expecting 1024 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.310 21:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.883 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.883 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.883 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.883 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.883 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
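The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one "Key: value" pair at a time, which is why every non-matching key shows up in the log as an IFS=': ' / read -r / continue triple before the matching key finally hits the echo/return at common.sh@33. A minimal sketch of that lookup technique, with the per-node /sys/devices/system/node meminfo handling and the "Node N " prefix stripping seen in the trace deliberately elided:

#!/usr/bin/env bash
# Sketch only: simplified from the setup/common.sh behavior traced above.
get_meminfo() {
    local get=$1 var val _
    # IFS=': ' splits "HugePages_Surp:    0" into var=HugePages_Surp, val=0;
    # a trailing unit such as "kB" lands in the throwaway third field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
# e.g. get_meminfo HugePages_Surp prints 0 for the snapshot logged here.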
00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907924 kB' 'MemAvailable: 9503260 kB' 'Buffers: 2436 kB' 'Cached: 1808896 kB' 'SwapCached: 0 kB' 'Active: 463576 kB' 'Inactive: 1465200 kB' 'Active(anon): 127916 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119004 kB' 'Mapped: 50948 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135176 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72344 kB' 'KernelStack: 6164 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.884 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.885 21:04:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907924 kB' 'MemAvailable: 9503260 kB' 'Buffers: 2436 kB' 'Cached: 1808896 kB' 'SwapCached: 0 kB' 'Active: 462948 kB' 'Inactive: 1465200 kB' 'Active(anon): 127288 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118376 kB' 'Mapped: 50900 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135180 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72348 kB' 'KernelStack: 6100 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 
21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.885 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
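hugepages.sh@97 recorded anon=0 just above; the scan in progress here is the @99 HugePages_Surp sample, and the @100 HugePages_Rsvd lookup follows below. A sketch of that three-counter sequence, assuming the simplified get_meminfo sketched earlier (the real function also fills the sorted_t/sorted_s per-node arrays seen at hugepages.sh@126-@127):

#!/usr/bin/env bash
# Sketch only: the @97/@99/@100 sampling order from the trace, simplified.
verify_nr_hugepages() {
    local anon surp resv
    anon=$(get_meminfo AnonHugePages)   # @97: THP pages currently in use
    surp=$(get_meminfo HugePages_Surp)  # @99: pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)  # @100: reserved but not yet faulted in
    echo "anon=$anon surp=$surp resv=$resv"  # all 0 in the snapshots logged here
}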
00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.886 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.887 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.887 21:04:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907924 kB' 'MemAvailable: 9503260 kB' 'Buffers: 2436 kB' 'Cached: 1808896 kB' 'SwapCached: 0 kB' 'Active: 462888 kB' 'Inactive: 1465200 kB' 'Active(anon): 127228 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118316 kB' 'Mapped: 50780 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135152 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72320 kB' 'KernelStack: 6128 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[00:04:56.887-00:04:56.889 setup/common.sh@31-32 xtrace: every key of the dump above is read with IFS=': ' and tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, each non-matching field skipped via continue]
00:04:56.889 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.889 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.889 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.889 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:56.889 nr_hugepages=1024
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:56.890 resv_hugepages=0
21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.890 surplus_hugepages=0
21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.890 anon_hugepages=0
21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.890 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907924 kB' 'MemAvailable: 9503260 kB' 'Buffers: 2436 kB' 'Cached: 1808896 kB' 'SwapCached: 0 kB' 'Active: 462888 kB' 'Inactive: 1465200 kB' 'Active(anon): 127228 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118316 kB' 'Mapped: 50780 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135152 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72320 kB' 'KernelStack: 6196 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB'
[00:04:56.890-00:04:56.891 setup/common.sh@31-32 xtrace: the same per-field scan of the dump above, this time against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l]
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.891 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.892 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7907924 kB' 'MemUsed: 4334048 kB' 'SwapCached: 0 kB' 'Active: 462888 kB' 'Inactive: 1465200 kB' 'Active(anon): 127228 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1465200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1811332 kB' 'Mapped: 50780 kB' 'AnonPages: 118576 kB' 'Shmem: 10472 kB' 'KernelStack: 6196 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 135152 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[00:04:56.892-00:04:56.893 setup/common.sh@31-32 xtrace: per-field scan of the node0 dump above against \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:04:56.893 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.893 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.893 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:57.152 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:57.152 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:57.152 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:57.152 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:57.152 node0=1024 expecting 1024
21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:57.152 21:04:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:57.152
00:04:57.152 real 0m1.354s
00:04:57.152 user 0m0.585s
00:04:57.152 sys 0m0.836s
00:04:57.152 21:04:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:57.152 21:04:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:57.152 ************************************
00:04:57.152 END TEST no_shrink_alloc
00:04:57.152 ************************************
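The get_meminfo traces above all reduce to one pattern: read a meminfo file line by line with IFS=': ', compare each key against the requested field, and echo the value on the first match. A minimal standalone sketch of that idea follows; get_field is a hypothetical name, simplified from the traced helper, which additionally resolves per-node meminfo paths and strips the 'Node N' prefixes via mapfile.

get_field() {
    # Split each "Key:   value kB" line on ':' and whitespace, as the traced
    # read loop does, then return the value once the key matches.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 1024 for HugePages_Total on the node traced above
            return 0
        fi
    done < /proc/meminfo
    return 1
}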
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.153 21:04:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:57.153 21:04:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:57.153 21:04:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:57.153 00:04:57.153 real 0m5.975s 00:04:57.153 user 0m2.680s 00:04:57.153 sys 0m3.464s 00:04:57.153 21:04:08 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.153 21:04:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:57.153 ************************************ 00:04:57.153 END TEST hugepages 00:04:57.153 ************************************ 00:04:57.153 21:04:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:57.153 21:04:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:57.153 21:04:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.153 21:04:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.153 21:04:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:57.153 ************************************ 00:04:57.153 START TEST driver 00:04:57.153 ************************************ 00:04:57.153 21:04:08 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:57.153 * Looking for test storage... 00:04:57.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.153 21:04:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:57.153 21:04:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.153 21:04:08 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.716 21:04:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:03.716 21:04:14 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.716 21:04:14 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.716 21:04:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:03.716 ************************************ 00:05:03.716 START TEST guess_driver 00:05:03.716 ************************************ 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:03.716 
21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:03.716 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:03.716 Looking for driver=uio_pci_generic 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.716 21:04:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.716 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:03.716 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:03.716 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:04.284 21:04:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:04.284 21:04:15 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]]
21:04:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:10.861
00:05:10.861 real 0m7.169s
00:05:10.861 user 0m0.814s
00:05:10.861 sys 0m1.423s
00:05:10.861 21:04:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:10.861 21:04:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:05:10.861 ************************************
00:05:10.861 END TEST guess_driver
00:05:10.861 ************************************
00:05:10.861 21:04:21 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:05:10.861
00:05:10.861 real 0m13.226s
00:05:10.861 user 0m1.177s
00:05:10.861 sys 0m2.204s
00:05:10.861 21:04:21 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:10.861 21:04:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:10.861 ************************************
00:05:10.861 END TEST driver
00:05:10.861 ************************************
00:05:10.861 21:04:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:05:10.861 21:04:21 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:05:10.861 21:04:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:10.862 21:04:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:10.862 21:04:21 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:10.862 ************************************
00:05:10.862 START TEST devices
00:05:10.862 ************************************
00:05:10.862 21:04:21 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:05:10.862 * Looking for test storage...
00:05:10.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.862 21:04:21 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:10.862 21:04:21 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:10.862 21:04:21 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.862 21:04:21 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.798 21:04:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:11.798 21:04:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:11.799 21:04:23 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:11.799 21:04:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:11.799 No valid GPT data, bailing 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:11.799 
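The scan traced through here (devices.sh@200-206) visits each /sys/block/nvme* namespace except controller nodes (the !(*c*) extglob), rejects zoned namespaces, treats a readable partition-table signature as "in use", and keeps only disks of at least min_disk_size (3 GiB), which is why the 1 GiB nvme3n1 further below gets sized but not added. A self-contained sketch under those rules; the spdk-gpt.py helper is elided, and the blkid PTTYPE probe from scripts/common.sh@391 stands alone here:

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    # Keep only namespaces that are non-zoned, not already partitioned, and
    # big enough to host the mount tests. 3221225472 = 3 GiB, as traced.
    min_disk_size=3221225472
    blocks=()

    for block in /sys/block/nvme!(*c*); do
        dev=${block##*/}
        # Zoned namespaces report e.g. "host-managed" here instead of "none"
        [[ $(< "$block/queue/zoned") == none ]] || continue
        # A partition-table signature (PTTYPE) means the disk is in use
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] || continue
        # size is reported in 512-byte sectors; nvme3n1's 1 GiB fails this gate
        (( $(< "$block/size") * 512 >= min_disk_size )) && blocks+=("$dev")
    done
    printf 'usable disk: %s\n' "${blocks[@]}"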
21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:11.799 No valid GPT data, bailing 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:11.799 No valid GPT data, bailing 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:11.799 21:04:23 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:11.799 No valid GPT data, bailing 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:11.799 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:11.799 21:04:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:12.058 No valid GPT data, bailing 00:05:12.058 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:12.058 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:12.058 21:04:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:12.058 21:04:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:12.058 21:04:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:12.058 21:04:23 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:12.058 21:04:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:12.058 21:04:23 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:12.058 No valid GPT data, bailing 00:05:12.058 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:12.058 21:04:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:12.058 21:04:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:12.058 21:04:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:12.058 21:04:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:12.058 21:04:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:12.058 21:04:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:12.058 21:04:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.058 21:04:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.058 21:04:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:12.058 ************************************ 00:05:12.058 START TEST nvme_mount 00:05:12.058 ************************************ 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:12.058 21:04:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:12.991 Creating new GPT entries in memory. 00:05:12.991 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.991 other utilities. 00:05:12.991 21:04:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.991 21:04:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.991 21:04:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.991 21:04:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.991 21:04:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:14.363 Creating new GPT entries in memory. 00:05:14.363 The operation has completed successfully. 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59390 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.363 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.622 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.622 21:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.622 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.622 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.622 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.622 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.879 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.879 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.137 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.137 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.395 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:15.396 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
00:05:15.396 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.396 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.396 21:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.654 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.654 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:15.654 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:15.654 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.654 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.654 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.913 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.913 
21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.913 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.913 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.913 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.913 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.171 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:16.171 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.430 21:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.688 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:16.688 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:16.688 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:16.688 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.688 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:16.688 21:04:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.688 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:16.688 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.945 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:16.945 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.945 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:16.945 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.202 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.202 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.459 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.459 00:05:17.459 real 0m5.336s 00:05:17.459 user 0m1.465s 00:05:17.459 sys 0m1.549s 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.459 21:04:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:17.459 ************************************ 00:05:17.459 END TEST nvme_mount 00:05:17.459 ************************************ 00:05:17.459 21:04:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:17.459 21:04:28 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.459 21:04:28 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.459 21:04:28 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.459 21:04:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.459 ************************************ 00:05:17.459 START TEST dm_mount 00:05:17.459 ************************************ 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:17.459 
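Both the nvme_mount test that just finished and the dm_mount test starting here lean on the same verify helper (devices.sh@48): rerun setup.sh config with PCI_ALLOWED narrowed to one controller and scan its report for an "Active devices:" entry naming the expected holder, exactly the read -r pci _ _ status loops traced above. A rough standalone rendering, assuming the report format shown in the trace; verify_held_by is my name for it, and rootdir must point at the SPDK checkout:

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk

    # Confirm one controller is held by the expected mount/holder rather
    # than getting rebound by setup.sh.
    verify_held_by() {
        local dev=$1 expected=$2 pci _ status found=0
        while read -r pci _ _ status; do
            [[ $pci == "$dev" ]] || continue
            # e.g. "0000:00:11.0 (1b36 0010): Active devices: mount@nvme0n1:nvme0n1p1, ..."
            [[ $status == *"Active devices: "*"$expected"* ]] && found=1
        done < <(PCI_ALLOWED=$dev "$rootdir/scripts/setup.sh" config)
        (( found == 1 ))
    }

    verify_held_by 0000:00:11.0 nvme0n1:nvme0n1p1 && echo verified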
21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:17.459 21:04:28 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:18.390 Creating new GPT entries in memory. 00:05:18.390 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.390 other utilities. 00:05:18.390 21:04:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.390 21:04:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.390 21:04:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.390 21:04:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.390 21:04:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:19.758 Creating new GPT entries in memory. 00:05:19.758 The operation has completed successfully. 00:05:19.758 21:04:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.758 21:04:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.758 21:04:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.758 21:04:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.758 21:04:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:20.690 The operation has completed successfully. 
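sgdisk has now carved the two 128 MiB partitions (sectors 2048-264191 and 264192-526335) that dm_mount joins into a single device-mapper target below. A sketch of the whole sequence: the sgdisk and flock calls mirror the trace, but the dmsetup table is an assumption inferred from both partitions showing up as holders of dm-0 (the trace records only `dmsetup create nvme_dm_test`), and partprobe stands in for the repo's sync_dev_uevents.sh wait:

    #!/usr/bin/env bash
    disk=/dev/nvme0n1

    sgdisk "$disk" --zap-all                          # wipe old GPT + MBR
    # Two 262144-sector (128 MiB) partitions, serialized under flock as traced
    flock "$disk" sgdisk "$disk" --new=1:2048:264191
    flock "$disk" sgdisk "$disk" --new=2:264192:526335
    partprobe "$disk"    # stand-in for scripts/sync_dev_uevents.sh

    # Assumed linear concatenation: dm sectors 0-262143 from p1, rest from p2
    dmsetup create nvme_dm_test <<TABLE
    0 262144 linear ${disk}p1 0
    262144 262144 linear ${disk}p2 0
    TABLE

    mkfs.ext4 -qF /dev/mapper/nvme_dm_test            # then mounted and verified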
00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60021 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.690 21:04:31 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:20.690 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.947 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.512 21:04:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.770 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.770 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:21.770 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:21.770 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.770 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.770 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.028 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.028 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.028 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.028 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.028 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.028 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.286 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.286 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:05:22.544 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:22.544 00:05:22.544 real 0m5.133s 00:05:22.544 user 0m0.959s 00:05:22.544 sys 0m1.097s 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.544 21:04:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:22.544 ************************************ 00:05:22.544 END TEST dm_mount 00:05:22.544 ************************************ 00:05:22.544 21:04:34 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:22.544 21:04:34 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:22.544 21:04:34 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.544 21:04:34 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.544 21:04:34 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.544 21:04:34 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.544 21:04:34 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.544 21:04:34 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.802 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.802 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.802 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.802 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.802 21:04:34 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:22.803 21:04:34 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.803 21:04:34 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.803 21:04:34 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.803 21:04:34 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.803 21:04:34 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.803 21:04:34 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:22.803 ************************************ 00:05:22.803 END TEST devices 00:05:22.803 ************************************ 00:05:22.803 00:05:22.803 real 0m12.530s 00:05:22.803 user 0m3.333s 00:05:22.803 sys 0m3.477s 00:05:22.803 21:04:34 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.803 21:04:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:23.061 21:04:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:23.061 00:05:23.061 real 0m44.185s 00:05:23.061 user 0m10.472s 00:05:23.061 sys 0m13.316s 00:05:23.061 ************************************ 00:05:23.061 END TEST setup.sh 00:05:23.061 ************************************ 00:05:23.061 21:04:34 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.061 21:04:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:23.061 21:04:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.061 21:04:34 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:23.625 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.883 Hugepages 00:05:23.883 node hugesize free / total 00:05:23.883 node0 1048576kB 0 / 0 00:05:23.883 node0 2048kB 2048 / 2048 00:05:23.883 00:05:23.883 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:24.143 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:24.143 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:24.143 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:24.143 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:24.425 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:24.425 21:04:35 -- spdk/autotest.sh@130 -- # uname -s 00:05:24.425 21:04:35 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:24.425 21:04:35 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:24.425 21:04:35 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.272 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.272 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.272 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.530 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.530 21:04:36 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:26.465 21:04:37 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:26.465 21:04:37 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:26.465 21:04:37 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.465 21:04:37 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:26.465 21:04:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:26.465 21:04:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:26.465 21:04:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.465 21:04:37 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.465 21:04:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:26.723 21:04:38 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:26.723 21:04:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:26.723 21:04:38 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.238 Waiting for block devices as requested 00:05:27.238 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:27.238 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:27.238 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:27.495 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:32.772 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:32.772 21:04:43 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:32.772 21:04:43 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:32.772 21:04:43 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:32.772 21:04:43 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:32.772 21:04:43 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:32.772 21:04:43 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:32.772 21:04:43 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:32.772 21:04:43 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:32.772 21:04:43 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:32.772 21:04:43 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:32.772 21:04:43 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1557 -- # continue 00:05:32.772 21:04:43 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:32.772 21:04:43 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:32.772 21:04:43 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:32.772 21:04:43 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:32.772 21:04:43 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:32.772 21:04:43 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1557 -- # continue 00:05:32.772 21:04:43 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:32.772 21:04:43 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:05:32.772 21:04:43 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:32.772 21:04:43 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:32.772 21:04:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:32.772 21:04:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:32.772 21:04:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:32.772 21:04:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:05:32.772 21:04:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:32.772 21:04:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:32.772 21:04:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:32.772 21:04:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:32.772 21:04:44 -- common/autotest_common.sh@1557 -- # continue 00:05:32.772 21:04:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:32.772 21:04:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:32.772 21:04:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:32.772 21:04:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:05:32.772 21:04:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:32.772 21:04:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:32.772 21:04:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:32.772 21:04:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:05:32.772 21:04:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:05:32.772 21:04:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:05:32.772 21:04:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:05:32.772 21:04:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:32.772 21:04:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:32.772 21:04:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:32.772 21:04:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:32.772 21:04:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:32.772 21:04:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:05:32.772 21:04:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:32.772 21:04:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:32.772 21:04:44 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:32.772 21:04:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:32.772 21:04:44 -- common/autotest_common.sh@1557 -- # continue 00:05:32.772 21:04:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:32.772 21:04:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.772 21:04:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.772 21:04:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:32.772 21:04:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.772 21:04:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.772 21:04:44 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.905 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.905 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.905 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.905 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.905 21:04:45 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:33.905 21:04:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.905 21:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:33.905 21:04:45 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:33.905 21:04:45 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:33.905 21:04:45 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:33.905 21:04:45 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:33.905 21:04:45 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:33.905 21:04:45 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:33.905 21:04:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:33.905 21:04:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:33.905 21:04:45 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.905 21:04:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:33.905 21:04:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:33.905 21:04:45 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:33.905 21:04:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:33.905 21:04:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.905 21:04:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:33.905 21:04:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:33.905 21:04:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:33.905 21:04:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.905 21:04:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:33.905 21:04:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:33.905 21:04:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:33.905 21:04:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.905 21:04:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:33.905 21:04:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:33.905 21:04:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:33.905 21:04:45 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.905 21:04:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:34.164 21:04:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:34.164 21:04:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:34.164 21:04:45 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:34.164 21:04:45 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:34.164 21:04:45 -- common/autotest_common.sh@1593 -- # return 0 00:05:34.164 21:04:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:34.164 21:04:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:34.164 21:04:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.164 21:04:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.164 21:04:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:34.164 21:04:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.164 21:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:34.164 21:04:45 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:34.164 21:04:45 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:34.164 21:04:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.164 21:04:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.164 21:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:34.164 ************************************ 00:05:34.164 START TEST env 00:05:34.164 ************************************ 00:05:34.164 21:04:45 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:34.164 * Looking for test storage... 00:05:34.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:34.164 21:04:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:34.164 21:04:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.164 21:04:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.164 21:04:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.164 ************************************ 00:05:34.164 START TEST env_memory 00:05:34.164 ************************************ 00:05:34.164 21:04:45 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:34.164 00:05:34.164 00:05:34.164 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.164 http://cunit.sourceforge.net/ 00:05:34.164 00:05:34.164 00:05:34.164 Suite: memory 00:05:34.164 Test: alloc and free memory map ...[2024-07-14 21:04:45.644383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.164 passed 00:05:34.164 Test: mem map translation ...[2024-07-14 21:04:45.706419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.164 [2024-07-14 21:04:45.707202] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.164 [2024-07-14 21:04:45.708023] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:34.164 [2024-07-14 21:04:45.708773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:34.422 passed 00:05:34.422 Test: mem map registration ...[2024-07-14 21:04:45.786447] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:34.422 [2024-07-14 21:04:45.786541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:34.422 passed 00:05:34.422 Test: mem map adjacent registrations ...passed 00:05:34.422 00:05:34.422 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.422 suites 1 1 n/a 0 0 00:05:34.422 tests 4 4 4 0 0 00:05:34.422 asserts 152 152 152 0 n/a 00:05:34.422 00:05:34.422 Elapsed time = 0.281 seconds 00:05:34.422 00:05:34.422 real 0m0.330s 00:05:34.422 user 0m0.286s 00:05:34.422 sys 0m0.034s 00:05:34.422 21:04:45 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.422 21:04:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:34.422 ************************************ 00:05:34.422 END TEST env_memory 00:05:34.422 ************************************ 00:05:34.422 21:04:45 env -- common/autotest_common.sh@1142 -- # return 0 00:05:34.422 21:04:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:34.422 21:04:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.422 21:04:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.422 21:04:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.422 ************************************ 00:05:34.422 START TEST env_vtophys 00:05:34.422 ************************************ 00:05:34.422 21:04:45 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:34.682 EAL: lib.eal log level changed from notice to debug 00:05:34.682 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 1 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 2 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 3 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 4 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 5 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 6 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 7 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 8 as core 0 on socket 0 00:05:34.682 EAL: Detected lcore 9 as core 0 on socket 0 00:05:34.682 EAL: Maximum logical cores by configuration: 128 00:05:34.682 EAL: Detected CPU lcores: 10 00:05:34.682 EAL: Detected NUMA nodes: 1 00:05:34.682 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:34.682 EAL: Detected shared linkage of DPDK 00:05:34.682 EAL: No shared files mode enabled, IPC will be disabled 00:05:34.682 EAL: Selected IOVA mode 'PA' 00:05:34.682 EAL: Probing VFIO support... 00:05:34.682 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:34.682 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:34.682 EAL: Ask a virtual area of 0x2e000 bytes 00:05:34.682 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:34.682 EAL: Setting up physically contiguous memory... 
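Note: the nvme_namespace_revert pre-cleanup traced a few entries earlier resolves each PCI BDF to its /dev/nvmeX controller node through sysfs, then parses the OACS field of 'nvme id-ctrl' to see whether namespace management (bit 3) is supported. A condensed sketch of that mapping; the BDF and resulting device name match this run and will differ on other machines:

    bdf=0000:00:10.0
    ctrlr_dir=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$ctrlr_dir")                      # /dev/nvme1 in this run
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)  # ' 0x12a' here
    (( oacs & 0x8 )) && echo "$ctrlr supports namespace management"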
00:05:34.682 EAL: Setting maximum number of open files to 524288 00:05:34.682 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:34.682 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:34.682 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.682 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:34.682 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.682 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.682 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:34.682 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:34.682 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.682 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:34.682 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.682 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.682 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:34.682 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:34.682 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.682 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:34.682 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.682 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.682 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:34.682 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:34.682 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.682 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:34.682 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.682 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.682 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:34.682 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:34.682 EAL: Hugepages will be freed exactly as allocated. 00:05:34.682 EAL: No shared files mode enabled, IPC is disabled 00:05:34.682 EAL: No shared files mode enabled, IPC is disabled 00:05:34.682 EAL: TSC frequency is ~2200000 KHz 00:05:34.682 EAL: Main lcore 0 is ready (tid=7f698232ca40;cpuset=[0]) 00:05:34.682 EAL: Trying to obtain current memory policy. 00:05:34.682 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.682 EAL: Restoring previous memory policy: 0 00:05:34.682 EAL: request: mp_malloc_sync 00:05:34.682 EAL: No shared files mode enabled, IPC is disabled 00:05:34.682 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.682 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:34.682 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:34.682 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.682 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:34.682 00:05:34.682 00:05:34.682 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.682 http://cunit.sourceforge.net/ 00:05:34.682 00:05:34.682 00:05:34.682 Suite: components_suite 00:05:35.250 Test: vtophys_malloc_test ...passed 00:05:35.250 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
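Note: the EAL bring-up above reserves four memseg lists of 0x400000000 bytes each on socket 0, backed lazily by 2 MiB hugepages. A quick back-of-envelope check of the total virtual address space set aside, in plain shell arithmetic:

    printf '%d GiB\n' $(( 4 * 0x400000000 / 1024**3 ))   # -> 64 GiB of reserved VA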
00:05:35.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.250 EAL: Restoring previous memory policy: 4 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.250 EAL: Trying to obtain current memory policy. 00:05:35.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.250 EAL: Restoring previous memory policy: 4 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.250 EAL: Trying to obtain current memory policy. 00:05:35.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.250 EAL: Restoring previous memory policy: 4 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.250 EAL: Trying to obtain current memory policy. 00:05:35.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.250 EAL: Restoring previous memory policy: 4 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.250 EAL: Trying to obtain current memory policy. 00:05:35.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.250 EAL: Restoring previous memory policy: 4 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.250 EAL: Trying to obtain current memory policy. 
00:05:35.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.250 EAL: Restoring previous memory policy: 4 00:05:35.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.250 EAL: request: mp_malloc_sync 00:05:35.250 EAL: No shared files mode enabled, IPC is disabled 00:05:35.250 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.508 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.508 EAL: request: mp_malloc_sync 00:05:35.508 EAL: No shared files mode enabled, IPC is disabled 00:05:35.508 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.508 EAL: Trying to obtain current memory policy. 00:05:35.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.508 EAL: Restoring previous memory policy: 4 00:05:35.508 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.508 EAL: request: mp_malloc_sync 00:05:35.508 EAL: No shared files mode enabled, IPC is disabled 00:05:35.508 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.767 EAL: request: mp_malloc_sync 00:05:35.767 EAL: No shared files mode enabled, IPC is disabled 00:05:35.767 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.767 EAL: Trying to obtain current memory policy. 00:05:35.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.767 EAL: Restoring previous memory policy: 4 00:05:35.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.767 EAL: request: mp_malloc_sync 00:05:35.767 EAL: No shared files mode enabled, IPC is disabled 00:05:35.767 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.335 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.335 EAL: request: mp_malloc_sync 00:05:36.335 EAL: No shared files mode enabled, IPC is disabled 00:05:36.335 EAL: Heap on socket 0 was shrunk by 258MB 00:05:36.594 EAL: Trying to obtain current memory policy. 00:05:36.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.594 EAL: Restoring previous memory policy: 4 00:05:36.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.594 EAL: request: mp_malloc_sync 00:05:36.594 EAL: No shared files mode enabled, IPC is disabled 00:05:36.594 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.161 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.420 EAL: request: mp_malloc_sync 00:05:37.420 EAL: No shared files mode enabled, IPC is disabled 00:05:37.420 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.021 EAL: Trying to obtain current memory policy. 
00:05:38.021 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.021 EAL: Restoring previous memory policy: 4 00:05:38.021 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.021 EAL: request: mp_malloc_sync 00:05:38.021 EAL: No shared files mode enabled, IPC is disabled 00:05:38.021 EAL: Heap on socket 0 was expanded by 1026MB 00:05:39.397 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.397 EAL: request: mp_malloc_sync 00:05:39.397 EAL: No shared files mode enabled, IPC is disabled 00:05:39.397 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.774 passed 00:05:40.774 00:05:40.774 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.774 suites 1 1 n/a 0 0 00:05:40.774 tests 2 2 2 0 0 00:05:40.774 asserts 5439 5439 5439 0 n/a 00:05:40.774 00:05:40.774 Elapsed time = 5.812 seconds 00:05:40.774 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.774 EAL: request: mp_malloc_sync 00:05:40.774 EAL: No shared files mode enabled, IPC is disabled 00:05:40.774 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.774 EAL: No shared files mode enabled, IPC is disabled 00:05:40.774 EAL: No shared files mode enabled, IPC is disabled 00:05:40.774 EAL: No shared files mode enabled, IPC is disabled 00:05:40.774 00:05:40.774 real 0m6.125s 00:05:40.774 user 0m5.274s 00:05:40.774 sys 0m0.695s 00:05:40.774 21:04:52 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.774 ************************************ 00:05:40.774 END TEST env_vtophys 00:05:40.774 ************************************ 00:05:40.774 21:04:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:40.774 21:04:52 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.774 21:04:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.774 21:04:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.774 21:04:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.774 21:04:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.774 ************************************ 00:05:40.774 START TEST env_pci 00:05:40.774 ************************************ 00:05:40.774 21:04:52 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.774 00:05:40.774 00:05:40.774 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.774 http://cunit.sourceforge.net/ 00:05:40.774 00:05:40.774 00:05:40.774 Suite: pci 00:05:40.774 Test: pci_hook ...[2024-07-14 21:04:52.158992] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61833 has claimed it 00:05:40.774 passed 00:05:40.774 00:05:40.774 EAL: Cannot find device (10000:00:01.0) 00:05:40.774 EAL: Failed to attach device on primary process 00:05:40.774 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.774 suites 1 1 n/a 0 0 00:05:40.774 tests 1 1 1 0 0 00:05:40.774 asserts 25 25 25 0 n/a 00:05:40.774 00:05:40.774 Elapsed time = 0.007 seconds 00:05:40.774 00:05:40.774 real 0m0.076s 00:05:40.774 user 0m0.041s 00:05:40.774 sys 0m0.035s 00:05:40.774 ************************************ 00:05:40.774 END TEST env_pci 00:05:40.774 ************************************ 00:05:40.774 21:04:52 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.774 21:04:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:40.774 21:04:52 env -- common/autotest_common.sh@1142 -- # 
return 0 00:05:40.774 21:04:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.774 21:04:52 env -- env/env.sh@15 -- # uname 00:05:40.774 21:04:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.774 21:04:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.774 21:04:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.774 21:04:52 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:40.774 21:04:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.774 21:04:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.774 ************************************ 00:05:40.774 START TEST env_dpdk_post_init 00:05:40.774 ************************************ 00:05:40.774 21:04:52 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.033 EAL: Detected CPU lcores: 10 00:05:41.033 EAL: Detected NUMA nodes: 1 00:05:41.033 EAL: Detected shared linkage of DPDK 00:05:41.033 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.033 EAL: Selected IOVA mode 'PA' 00:05:41.033 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.033 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:41.033 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:41.034 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:41.034 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:41.034 Starting DPDK initialization... 00:05:41.034 Starting SPDK post initialization... 00:05:41.034 SPDK NVMe probe 00:05:41.034 Attaching to 0000:00:10.0 00:05:41.034 Attaching to 0000:00:11.0 00:05:41.034 Attaching to 0000:00:12.0 00:05:41.034 Attaching to 0000:00:13.0 00:05:41.034 Attached to 0000:00:10.0 00:05:41.034 Attached to 0000:00:11.0 00:05:41.034 Attached to 0000:00:13.0 00:05:41.034 Attached to 0000:00:12.0 00:05:41.034 Cleaning up... 
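Note: for the probe sequence above to attach the controllers, the NVMe BDFs must first be bound to a userspace driver; scripts/setup.sh handles that (uio_pci_generic in this run, since the VFIO modules were reported missing). A sketch of the manual equivalent; HUGEMEM is an optional setup.sh knob:

    sudo HUGEMEM=2048 scripts/setup.sh    # unbind from the kernel nvme driver, reserve hugepages
    test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    sudo scripts/setup.sh reset           # hand the devices back to the kernel driver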
00:05:41.034 00:05:41.034 real 0m0.290s 00:05:41.034 user 0m0.096s 00:05:41.034 sys 0m0.097s 00:05:41.034 21:04:52 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.034 21:04:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.034 ************************************ 00:05:41.034 END TEST env_dpdk_post_init 00:05:41.034 ************************************ 00:05:41.293 21:04:52 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.293 21:04:52 env -- env/env.sh@26 -- # uname 00:05:41.293 21:04:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:41.293 21:04:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.293 21:04:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.293 21:04:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.293 21:04:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.293 ************************************ 00:05:41.293 START TEST env_mem_callbacks 00:05:41.293 ************************************ 00:05:41.293 21:04:52 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.293 EAL: Detected CPU lcores: 10 00:05:41.293 EAL: Detected NUMA nodes: 1 00:05:41.293 EAL: Detected shared linkage of DPDK 00:05:41.293 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.293 EAL: Selected IOVA mode 'PA' 00:05:41.293 00:05:41.293 00:05:41.293 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.293 http://cunit.sourceforge.net/ 00:05:41.293 00:05:41.293 00:05:41.293 Suite: memory 00:05:41.293 Test: test ... 00:05:41.293 register 0x200000200000 2097152 00:05:41.293 malloc 3145728 00:05:41.293 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.293 register 0x200000400000 4194304 00:05:41.293 buf 0x2000004fffc0 len 3145728 PASSED 00:05:41.293 malloc 64 00:05:41.293 buf 0x2000004ffec0 len 64 PASSED 00:05:41.293 malloc 4194304 00:05:41.293 register 0x200000800000 6291456 00:05:41.293 buf 0x2000009fffc0 len 4194304 PASSED 00:05:41.293 free 0x2000004fffc0 3145728 00:05:41.293 free 0x2000004ffec0 64 00:05:41.293 unregister 0x200000400000 4194304 PASSED 00:05:41.293 free 0x2000009fffc0 4194304 00:05:41.293 unregister 0x200000800000 6291456 PASSED 00:05:41.293 malloc 8388608 00:05:41.293 register 0x200000400000 10485760 00:05:41.293 buf 0x2000005fffc0 len 8388608 PASSED 00:05:41.293 free 0x2000005fffc0 8388608 00:05:41.293 unregister 0x200000400000 10485760 PASSED 00:05:41.293 passed 00:05:41.293 00:05:41.293 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.293 suites 1 1 n/a 0 0 00:05:41.293 tests 1 1 1 0 0 00:05:41.293 asserts 15 15 15 0 n/a 00:05:41.293 00:05:41.293 Elapsed time = 0.052 seconds 00:05:41.552 00:05:41.552 real 0m0.249s 00:05:41.552 user 0m0.089s 00:05:41.552 sys 0m0.059s 00:05:41.552 ************************************ 00:05:41.552 END TEST env_mem_callbacks 00:05:41.552 ************************************ 00:05:41.552 21:04:52 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.552 21:04:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:41.552 21:04:52 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.552 00:05:41.552 real 0m7.415s 00:05:41.552 user 0m5.898s 00:05:41.552 sys 0m1.131s 00:05:41.552 21:04:52 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.552 
21:04:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.553 ************************************ 00:05:41.553 END TEST env 00:05:41.553 ************************************ 00:05:41.553 21:04:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.553 21:04:52 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:41.553 21:04:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.553 21:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.553 21:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.553 ************************************ 00:05:41.553 START TEST rpc 00:05:41.553 ************************************ 00:05:41.553 21:04:52 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:41.553 * Looking for test storage... 00:05:41.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.553 21:04:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=61946 00:05:41.553 21:04:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:41.553 21:04:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.553 21:04:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 61946 00:05:41.553 21:04:53 rpc -- common/autotest_common.sh@829 -- # '[' -z 61946 ']' 00:05:41.553 21:04:53 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.553 21:04:53 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.553 21:04:53 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.553 21:04:53 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.553 21:04:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.812 [2024-07-14 21:04:53.111655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:41.812 [2024-07-14 21:04:53.111853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61946 ] 00:05:41.812 [2024-07-14 21:04:53.270179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.070 [2024-07-14 21:04:53.427232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:42.070 [2024-07-14 21:04:53.427304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61946' to capture a snapshot of events at runtime. 00:05:42.070 [2024-07-14 21:04:53.427327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:42.070 [2024-07-14 21:04:53.427345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:42.070 [2024-07-14 21:04:53.427362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61946 for offline analysis/debug. 
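Note: the NOTICE above explains how to capture the bdev tracepoint group that spdk_tgt was started with ('-e bdev'). A sketch of both capture modes it mentions; the pid and shm name are the ones from this run:

    build/bin/spdk_trace -s spdk_tgt -p 61946              # live snapshot from the running target
    cp /dev/shm/spdk_tgt_trace.pid61946 /tmp/              # or keep the shm file ...
    build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid61946   # ... for offline analysis after exit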
00:05:42.070 [2024-07-14 21:04:53.427419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.638 21:04:54 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.638 21:04:54 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.638 21:04:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.638 21:04:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.638 21:04:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:42.638 21:04:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:42.638 21:04:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.638 21:04:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.638 21:04:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.638 ************************************ 00:05:42.638 START TEST rpc_integrity 00:05:42.638 ************************************ 00:05:42.638 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.638 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.638 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.638 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.638 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.638 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.638 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.898 { 00:05:42.898 "name": "Malloc0", 00:05:42.898 "aliases": [ 00:05:42.898 "afb42ca3-ff7c-4a74-82dd-568dc95b20cb" 00:05:42.898 ], 00:05:42.898 "product_name": "Malloc disk", 00:05:42.898 "block_size": 512, 00:05:42.898 "num_blocks": 16384, 00:05:42.898 "uuid": "afb42ca3-ff7c-4a74-82dd-568dc95b20cb", 00:05:42.898 "assigned_rate_limits": { 00:05:42.898 "rw_ios_per_sec": 0, 00:05:42.898 "rw_mbytes_per_sec": 0, 00:05:42.898 "r_mbytes_per_sec": 0, 00:05:42.898 "w_mbytes_per_sec": 0 00:05:42.898 }, 00:05:42.898 "claimed": false, 00:05:42.898 "zoned": false, 00:05:42.898 "supported_io_types": { 00:05:42.898 "read": true, 00:05:42.898 "write": true, 00:05:42.898 "unmap": true, 00:05:42.898 "flush": true, 
00:05:42.898 "reset": true, 00:05:42.898 "nvme_admin": false, 00:05:42.898 "nvme_io": false, 00:05:42.898 "nvme_io_md": false, 00:05:42.898 "write_zeroes": true, 00:05:42.898 "zcopy": true, 00:05:42.898 "get_zone_info": false, 00:05:42.898 "zone_management": false, 00:05:42.898 "zone_append": false, 00:05:42.898 "compare": false, 00:05:42.898 "compare_and_write": false, 00:05:42.898 "abort": true, 00:05:42.898 "seek_hole": false, 00:05:42.898 "seek_data": false, 00:05:42.898 "copy": true, 00:05:42.898 "nvme_iov_md": false 00:05:42.898 }, 00:05:42.898 "memory_domains": [ 00:05:42.898 { 00:05:42.898 "dma_device_id": "system", 00:05:42.898 "dma_device_type": 1 00:05:42.898 }, 00:05:42.898 { 00:05:42.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.898 "dma_device_type": 2 00:05:42.898 } 00:05:42.898 ], 00:05:42.898 "driver_specific": {} 00:05:42.898 } 00:05:42.898 ]' 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.898 [2024-07-14 21:04:54.288689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.898 [2024-07-14 21:04:54.288822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.898 [2024-07-14 21:04:54.288864] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:42.898 [2024-07-14 21:04:54.288880] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.898 [2024-07-14 21:04:54.291312] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.898 [2024-07-14 21:04:54.291365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.898 Passthru0 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.898 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.898 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.898 { 00:05:42.898 "name": "Malloc0", 00:05:42.898 "aliases": [ 00:05:42.898 "afb42ca3-ff7c-4a74-82dd-568dc95b20cb" 00:05:42.898 ], 00:05:42.898 "product_name": "Malloc disk", 00:05:42.898 "block_size": 512, 00:05:42.898 "num_blocks": 16384, 00:05:42.898 "uuid": "afb42ca3-ff7c-4a74-82dd-568dc95b20cb", 00:05:42.898 "assigned_rate_limits": { 00:05:42.898 "rw_ios_per_sec": 0, 00:05:42.898 "rw_mbytes_per_sec": 0, 00:05:42.898 "r_mbytes_per_sec": 0, 00:05:42.898 "w_mbytes_per_sec": 0 00:05:42.899 }, 00:05:42.899 "claimed": true, 00:05:42.899 "claim_type": "exclusive_write", 00:05:42.899 "zoned": false, 00:05:42.899 "supported_io_types": { 00:05:42.899 "read": true, 00:05:42.899 "write": true, 00:05:42.899 "unmap": true, 00:05:42.899 "flush": true, 00:05:42.899 "reset": true, 00:05:42.899 "nvme_admin": false, 00:05:42.899 "nvme_io": false, 00:05:42.899 "nvme_io_md": false, 00:05:42.899 "write_zeroes": true, 00:05:42.899 "zcopy": true, 
00:05:42.899 "get_zone_info": false, 00:05:42.899 "zone_management": false, 00:05:42.899 "zone_append": false, 00:05:42.899 "compare": false, 00:05:42.899 "compare_and_write": false, 00:05:42.899 "abort": true, 00:05:42.899 "seek_hole": false, 00:05:42.899 "seek_data": false, 00:05:42.899 "copy": true, 00:05:42.899 "nvme_iov_md": false 00:05:42.899 }, 00:05:42.899 "memory_domains": [ 00:05:42.899 { 00:05:42.899 "dma_device_id": "system", 00:05:42.899 "dma_device_type": 1 00:05:42.899 }, 00:05:42.899 { 00:05:42.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.899 "dma_device_type": 2 00:05:42.899 } 00:05:42.899 ], 00:05:42.899 "driver_specific": {} 00:05:42.899 }, 00:05:42.899 { 00:05:42.899 "name": "Passthru0", 00:05:42.899 "aliases": [ 00:05:42.899 "8ddc900e-cf64-50e0-9551-0c7a72ab608a" 00:05:42.899 ], 00:05:42.899 "product_name": "passthru", 00:05:42.899 "block_size": 512, 00:05:42.899 "num_blocks": 16384, 00:05:42.899 "uuid": "8ddc900e-cf64-50e0-9551-0c7a72ab608a", 00:05:42.899 "assigned_rate_limits": { 00:05:42.899 "rw_ios_per_sec": 0, 00:05:42.899 "rw_mbytes_per_sec": 0, 00:05:42.899 "r_mbytes_per_sec": 0, 00:05:42.899 "w_mbytes_per_sec": 0 00:05:42.899 }, 00:05:42.899 "claimed": false, 00:05:42.899 "zoned": false, 00:05:42.899 "supported_io_types": { 00:05:42.899 "read": true, 00:05:42.899 "write": true, 00:05:42.899 "unmap": true, 00:05:42.899 "flush": true, 00:05:42.899 "reset": true, 00:05:42.899 "nvme_admin": false, 00:05:42.899 "nvme_io": false, 00:05:42.899 "nvme_io_md": false, 00:05:42.899 "write_zeroes": true, 00:05:42.899 "zcopy": true, 00:05:42.899 "get_zone_info": false, 00:05:42.899 "zone_management": false, 00:05:42.899 "zone_append": false, 00:05:42.899 "compare": false, 00:05:42.899 "compare_and_write": false, 00:05:42.899 "abort": true, 00:05:42.899 "seek_hole": false, 00:05:42.899 "seek_data": false, 00:05:42.899 "copy": true, 00:05:42.899 "nvme_iov_md": false 00:05:42.899 }, 00:05:42.899 "memory_domains": [ 00:05:42.899 { 00:05:42.899 "dma_device_id": "system", 00:05:42.899 "dma_device_type": 1 00:05:42.899 }, 00:05:42.899 { 00:05:42.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.899 "dma_device_type": 2 00:05:42.899 } 00:05:42.899 ], 00:05:42.899 "driver_specific": { 00:05:42.899 "passthru": { 00:05:42.899 "name": "Passthru0", 00:05:42.899 "base_bdev_name": "Malloc0" 00:05:42.899 } 00:05:42.899 } 00:05:42.899 } 00:05:42.899 ]' 00:05:42.899 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.899 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.899 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.899 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.899 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:42.899 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.899 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.899 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.158 21:04:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.158 00:05:43.158 real 0m0.360s 00:05:43.158 user 0m0.218s 00:05:43.158 sys 0m0.046s 00:05:43.158 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.158 21:04:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.158 ************************************ 00:05:43.158 END TEST rpc_integrity 00:05:43.158 ************************************ 00:05:43.158 21:04:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.158 21:04:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:43.158 21:04:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.158 21:04:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.158 21:04:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.158 ************************************ 00:05:43.158 START TEST rpc_plugins 00:05:43.158 ************************************ 00:05:43.158 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:43.158 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:43.158 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.158 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.158 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.158 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:43.158 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:43.158 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.158 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.158 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.158 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:43.158 { 00:05:43.158 "name": "Malloc1", 00:05:43.158 "aliases": [ 00:05:43.158 "746f1df2-4eae-47c8-a516-635c84465f2a" 00:05:43.158 ], 00:05:43.158 "product_name": "Malloc disk", 00:05:43.158 "block_size": 4096, 00:05:43.158 "num_blocks": 256, 00:05:43.158 "uuid": "746f1df2-4eae-47c8-a516-635c84465f2a", 00:05:43.158 "assigned_rate_limits": { 00:05:43.158 "rw_ios_per_sec": 0, 00:05:43.158 "rw_mbytes_per_sec": 0, 00:05:43.158 "r_mbytes_per_sec": 0, 00:05:43.158 "w_mbytes_per_sec": 0 00:05:43.158 }, 00:05:43.158 "claimed": false, 00:05:43.158 "zoned": false, 00:05:43.158 "supported_io_types": { 00:05:43.158 "read": true, 00:05:43.158 "write": true, 00:05:43.158 "unmap": true, 00:05:43.158 "flush": true, 00:05:43.158 "reset": true, 00:05:43.158 "nvme_admin": false, 00:05:43.158 "nvme_io": false, 00:05:43.158 "nvme_io_md": false, 00:05:43.158 "write_zeroes": true, 00:05:43.158 "zcopy": true, 00:05:43.158 "get_zone_info": false, 00:05:43.158 "zone_management": false, 00:05:43.158 "zone_append": false, 00:05:43.159 "compare": false, 00:05:43.159 "compare_and_write": false, 00:05:43.159 "abort": true, 00:05:43.159 "seek_hole": false, 00:05:43.159 "seek_data": false, 00:05:43.159 "copy": true, 00:05:43.159 "nvme_iov_md": false 00:05:43.159 }, 00:05:43.159 "memory_domains": [ 00:05:43.159 { 00:05:43.159 "dma_device_id": "system", 00:05:43.159 
"dma_device_type": 1 00:05:43.159 }, 00:05:43.159 { 00:05:43.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.159 "dma_device_type": 2 00:05:43.159 } 00:05:43.159 ], 00:05:43.159 "driver_specific": {} 00:05:43.159 } 00:05:43.159 ]' 00:05:43.159 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:43.159 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:43.159 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:43.159 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.159 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.159 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.159 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:43.159 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.159 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.159 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.159 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:43.159 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:43.159 21:04:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:43.159 00:05:43.159 real 0m0.164s 00:05:43.159 user 0m0.109s 00:05:43.159 sys 0m0.018s 00:05:43.159 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.159 ************************************ 00:05:43.159 END TEST rpc_plugins 00:05:43.159 ************************************ 00:05:43.159 21:04:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.418 21:04:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.418 21:04:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:43.418 21:04:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.418 21:04:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.418 21:04:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.418 ************************************ 00:05:43.418 START TEST rpc_trace_cmd_test 00:05:43.418 ************************************ 00:05:43.418 21:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:43.418 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:43.418 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:43.418 21:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.418 21:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.418 21:04:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.418 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:43.418 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61946", 00:05:43.418 "tpoint_group_mask": "0x8", 00:05:43.418 "iscsi_conn": { 00:05:43.418 "mask": "0x2", 00:05:43.418 "tpoint_mask": "0x0" 00:05:43.418 }, 00:05:43.418 "scsi": { 00:05:43.418 "mask": "0x4", 00:05:43.418 "tpoint_mask": "0x0" 00:05:43.418 }, 00:05:43.419 "bdev": { 00:05:43.419 "mask": "0x8", 00:05:43.419 "tpoint_mask": "0xffffffffffffffff" 00:05:43.419 }, 00:05:43.419 "nvmf_rdma": { 00:05:43.419 "mask": "0x10", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "nvmf_tcp": { 00:05:43.419 "mask": "0x20", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "ftl": 
{ 00:05:43.419 "mask": "0x40", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "blobfs": { 00:05:43.419 "mask": "0x80", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "dsa": { 00:05:43.419 "mask": "0x200", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "thread": { 00:05:43.419 "mask": "0x400", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "nvme_pcie": { 00:05:43.419 "mask": "0x800", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "iaa": { 00:05:43.419 "mask": "0x1000", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "nvme_tcp": { 00:05:43.419 "mask": "0x2000", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "bdev_nvme": { 00:05:43.419 "mask": "0x4000", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 }, 00:05:43.419 "sock": { 00:05:43.419 "mask": "0x8000", 00:05:43.419 "tpoint_mask": "0x0" 00:05:43.419 } 00:05:43.419 }' 00:05:43.419 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:43.419 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:43.419 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:43.419 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:43.419 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:43.419 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:43.419 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:43.678 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:43.678 21:04:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:43.678 21:04:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:43.678 00:05:43.678 real 0m0.281s 00:05:43.678 user 0m0.248s 00:05:43.678 sys 0m0.023s 00:05:43.678 21:04:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.678 ************************************ 00:05:43.678 END TEST rpc_trace_cmd_test 00:05:43.678 21:04:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.678 ************************************ 00:05:43.678 21:04:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.678 21:04:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:43.678 21:04:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:43.678 21:04:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:43.678 21:04:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.678 21:04:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.678 21:04:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.678 ************************************ 00:05:43.678 START TEST rpc_daemon_integrity 00:05:43.678 ************************************ 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq 
length 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.678 { 00:05:43.678 "name": "Malloc2", 00:05:43.678 "aliases": [ 00:05:43.678 "a70343da-3666-48b8-9803-0430342f1260" 00:05:43.678 ], 00:05:43.678 "product_name": "Malloc disk", 00:05:43.678 "block_size": 512, 00:05:43.678 "num_blocks": 16384, 00:05:43.678 "uuid": "a70343da-3666-48b8-9803-0430342f1260", 00:05:43.678 "assigned_rate_limits": { 00:05:43.678 "rw_ios_per_sec": 0, 00:05:43.678 "rw_mbytes_per_sec": 0, 00:05:43.678 "r_mbytes_per_sec": 0, 00:05:43.678 "w_mbytes_per_sec": 0 00:05:43.678 }, 00:05:43.678 "claimed": false, 00:05:43.678 "zoned": false, 00:05:43.678 "supported_io_types": { 00:05:43.678 "read": true, 00:05:43.678 "write": true, 00:05:43.678 "unmap": true, 00:05:43.678 "flush": true, 00:05:43.678 "reset": true, 00:05:43.678 "nvme_admin": false, 00:05:43.678 "nvme_io": false, 00:05:43.678 "nvme_io_md": false, 00:05:43.678 "write_zeroes": true, 00:05:43.678 "zcopy": true, 00:05:43.678 "get_zone_info": false, 00:05:43.678 "zone_management": false, 00:05:43.678 "zone_append": false, 00:05:43.678 "compare": false, 00:05:43.678 "compare_and_write": false, 00:05:43.678 "abort": true, 00:05:43.678 "seek_hole": false, 00:05:43.678 "seek_data": false, 00:05:43.678 "copy": true, 00:05:43.678 "nvme_iov_md": false 00:05:43.678 }, 00:05:43.678 "memory_domains": [ 00:05:43.678 { 00:05:43.678 "dma_device_id": "system", 00:05:43.678 "dma_device_type": 1 00:05:43.678 }, 00:05:43.678 { 00:05:43.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.678 "dma_device_type": 2 00:05:43.678 } 00:05:43.678 ], 00:05:43.678 "driver_specific": {} 00:05:43.678 } 00:05:43.678 ]' 00:05:43.678 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.937 [2024-07-14 21:04:55.249791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:43.937 [2024-07-14 21:04:55.249915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.937 [2024-07-14 21:04:55.249955] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:43.937 [2024-07-14 21:04:55.249972] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.937 [2024-07-14 21:04:55.252607] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.937 [2024-07-14 21:04:55.252662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.937 Passthru0 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.937 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.937 { 00:05:43.937 "name": "Malloc2", 00:05:43.937 "aliases": [ 00:05:43.937 "a70343da-3666-48b8-9803-0430342f1260" 00:05:43.937 ], 00:05:43.937 "product_name": "Malloc disk", 00:05:43.937 "block_size": 512, 00:05:43.937 "num_blocks": 16384, 00:05:43.937 "uuid": "a70343da-3666-48b8-9803-0430342f1260", 00:05:43.937 "assigned_rate_limits": { 00:05:43.937 "rw_ios_per_sec": 0, 00:05:43.937 "rw_mbytes_per_sec": 0, 00:05:43.938 "r_mbytes_per_sec": 0, 00:05:43.938 "w_mbytes_per_sec": 0 00:05:43.938 }, 00:05:43.938 "claimed": true, 00:05:43.938 "claim_type": "exclusive_write", 00:05:43.938 "zoned": false, 00:05:43.938 "supported_io_types": { 00:05:43.938 "read": true, 00:05:43.938 "write": true, 00:05:43.938 "unmap": true, 00:05:43.938 "flush": true, 00:05:43.938 "reset": true, 00:05:43.938 "nvme_admin": false, 00:05:43.938 "nvme_io": false, 00:05:43.938 "nvme_io_md": false, 00:05:43.938 "write_zeroes": true, 00:05:43.938 "zcopy": true, 00:05:43.938 "get_zone_info": false, 00:05:43.938 "zone_management": false, 00:05:43.938 "zone_append": false, 00:05:43.938 "compare": false, 00:05:43.938 "compare_and_write": false, 00:05:43.938 "abort": true, 00:05:43.938 "seek_hole": false, 00:05:43.938 "seek_data": false, 00:05:43.938 "copy": true, 00:05:43.938 "nvme_iov_md": false 00:05:43.938 }, 00:05:43.938 "memory_domains": [ 00:05:43.938 { 00:05:43.938 "dma_device_id": "system", 00:05:43.938 "dma_device_type": 1 00:05:43.938 }, 00:05:43.938 { 00:05:43.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.938 "dma_device_type": 2 00:05:43.938 } 00:05:43.938 ], 00:05:43.938 "driver_specific": {} 00:05:43.938 }, 00:05:43.938 { 00:05:43.938 "name": "Passthru0", 00:05:43.938 "aliases": [ 00:05:43.938 "4c16b752-55cd-5def-8bec-a4148787ed1b" 00:05:43.938 ], 00:05:43.938 "product_name": "passthru", 00:05:43.938 "block_size": 512, 00:05:43.938 "num_blocks": 16384, 00:05:43.938 "uuid": "4c16b752-55cd-5def-8bec-a4148787ed1b", 00:05:43.938 "assigned_rate_limits": { 00:05:43.938 "rw_ios_per_sec": 0, 00:05:43.938 "rw_mbytes_per_sec": 0, 00:05:43.938 "r_mbytes_per_sec": 0, 00:05:43.938 "w_mbytes_per_sec": 0 00:05:43.938 }, 00:05:43.938 "claimed": false, 00:05:43.938 "zoned": false, 00:05:43.938 "supported_io_types": { 00:05:43.938 "read": true, 00:05:43.938 "write": true, 00:05:43.938 "unmap": true, 00:05:43.938 "flush": true, 00:05:43.938 "reset": true, 00:05:43.938 "nvme_admin": false, 00:05:43.938 "nvme_io": false, 00:05:43.938 "nvme_io_md": false, 00:05:43.938 "write_zeroes": true, 00:05:43.938 "zcopy": true, 00:05:43.938 "get_zone_info": false, 00:05:43.938 "zone_management": false, 00:05:43.938 "zone_append": false, 00:05:43.938 "compare": 
false, 00:05:43.938 "compare_and_write": false, 00:05:43.938 "abort": true, 00:05:43.938 "seek_hole": false, 00:05:43.938 "seek_data": false, 00:05:43.938 "copy": true, 00:05:43.938 "nvme_iov_md": false 00:05:43.938 }, 00:05:43.938 "memory_domains": [ 00:05:43.938 { 00:05:43.938 "dma_device_id": "system", 00:05:43.938 "dma_device_type": 1 00:05:43.938 }, 00:05:43.938 { 00:05:43.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.938 "dma_device_type": 2 00:05:43.938 } 00:05:43.938 ], 00:05:43.938 "driver_specific": { 00:05:43.938 "passthru": { 00:05:43.938 "name": "Passthru0", 00:05:43.938 "base_bdev_name": "Malloc2" 00:05:43.938 } 00:05:43.938 } 00:05:43.938 } 00:05:43.938 ]' 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.938 00:05:43.938 real 0m0.344s 00:05:43.938 user 0m0.218s 00:05:43.938 sys 0m0.039s 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.938 21:04:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.938 ************************************ 00:05:43.938 END TEST rpc_daemon_integrity 00:05:43.938 ************************************ 00:05:43.938 21:04:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.938 21:04:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.938 21:04:55 rpc -- rpc/rpc.sh@84 -- # killprocess 61946 00:05:43.938 21:04:55 rpc -- common/autotest_common.sh@948 -- # '[' -z 61946 ']' 00:05:43.938 21:04:55 rpc -- common/autotest_common.sh@952 -- # kill -0 61946 00:05:43.938 21:04:55 rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.938 21:04:55 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.938 21:04:55 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61946 00:05:44.197 21:04:55 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.197 21:04:55 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.197 killing process with pid 61946 00:05:44.197 
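The teardown trace above is the harness's killprocess helper at work: kill -0 to confirm the pid is still alive, ps --no-headers -o comm= to make sure the name is an SPDK reactor before signalling it, then kill and wait to reap the child. A condensed sketch of that pattern, reconstructed from the trace (the real helper in autotest_common.sh carries a few more guards, e.g. for sudo wrappers):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # bail out if the process is already gone
        # sanity-check the command name so an unrelated, recycled pid is never signalled
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                                 # reap the child and propagate its status
    }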
21:04:55 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61946' 00:05:44.197 21:04:55 rpc -- common/autotest_common.sh@967 -- # kill 61946 00:05:44.197 21:04:55 rpc -- common/autotest_common.sh@972 -- # wait 61946 00:05:46.100 00:05:46.100 real 0m4.337s 00:05:46.100 user 0m5.247s 00:05:46.100 sys 0m0.674s 00:05:46.100 21:04:57 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.100 21:04:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.100 ************************************ 00:05:46.100 END TEST rpc 00:05:46.100 ************************************ 00:05:46.100 21:04:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.100 21:04:57 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:46.100 21:04:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.100 21:04:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.100 21:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:46.100 ************************************ 00:05:46.100 START TEST skip_rpc 00:05:46.100 ************************************ 00:05:46.100 21:04:57 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:46.100 * Looking for test storage... 00:05:46.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.100 21:04:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.100 21:04:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.100 21:04:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:46.100 21:04:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.100 21:04:57 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.100 21:04:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.100 ************************************ 00:05:46.100 START TEST skip_rpc 00:05:46.100 ************************************ 00:05:46.100 21:04:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:46.100 21:04:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62162 00:05:46.100 21:04:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.100 21:04:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:46.100 21:04:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:46.100 [2024-07-14 21:04:57.545512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:46.100 [2024-07-14 21:04:57.545687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62162 ] 00:05:46.360 [2024-07-14 21:04:57.717723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.360 [2024-07-14 21:04:57.871058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62162 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62162 ']' 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62162 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:51.632 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.633 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62162 00:05:51.633 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.633 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.633 killing process with pid 62162 00:05:51.633 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62162' 00:05:51.633 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62162 00:05:51.633 21:05:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62162 00:05:53.006 00:05:53.006 real 0m6.780s 00:05:53.006 user 0m6.379s 00:05:53.006 sys 0m0.303s 00:05:53.006 21:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.006 21:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.006 ************************************ 00:05:53.006 END TEST skip_rpc 00:05:53.006 
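The passing condition of this test is a failure: the target runs with --no-rpc-server, so the NOT wrapper around rpc_cmd spdk_get_version must observe a non-zero exit (the es=1 above) before the target is killed. A hedged sketch of the same negative test, assuming scripts/rpc.py and the default socket path:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                   # mirror the test: no RPC socket will ever appear

    if scripts/rpc.py spdk_get_version; then
        echo "FAIL: RPC succeeded although no RPC server was started" >&2
        exit 1
    fi
    kill "$tgt_pid"; wait "$tgt_pid"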
************************************ 00:05:53.006 21:05:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.006 21:05:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.007 21:05:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.007 21:05:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.007 21:05:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.007 ************************************ 00:05:53.007 START TEST skip_rpc_with_json 00:05:53.007 ************************************ 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62260 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62260 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62260 ']' 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.007 21:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.007 [2024-07-14 21:05:04.377375] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:53.007 [2024-07-14 21:05:04.377559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62260 ] 00:05:53.273 [2024-07-14 21:05:04.555206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.273 [2024-07-14 21:05:04.752104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.854 [2024-07-14 21:05:05.360247] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.854 request: 00:05:53.854 { 00:05:53.854 "trtype": "tcp", 00:05:53.854 "method": "nvmf_get_transports", 00:05:53.854 "req_id": 1 00:05:53.854 } 00:05:53.854 Got JSON-RPC error response 00:05:53.854 response: 00:05:53.854 { 00:05:53.854 "code": -19, 00:05:53.854 "message": "No such device" 00:05:53.854 } 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.854 [2024-07-14 21:05:05.376394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.854 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.113 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.113 21:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.113 { 00:05:54.113 "subsystems": [ 00:05:54.113 { 00:05:54.113 "subsystem": "keyring", 00:05:54.113 "config": [] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "iobuf", 00:05:54.113 "config": [ 00:05:54.113 { 00:05:54.113 "method": "iobuf_set_options", 00:05:54.113 "params": { 00:05:54.113 "small_pool_count": 8192, 00:05:54.113 "large_pool_count": 1024, 00:05:54.113 "small_bufsize": 8192, 00:05:54.113 "large_bufsize": 135168 00:05:54.113 } 00:05:54.113 } 00:05:54.113 ] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "sock", 00:05:54.113 "config": [ 00:05:54.113 { 00:05:54.113 "method": "sock_set_default_impl", 00:05:54.113 "params": { 00:05:54.113 "impl_name": "posix" 00:05:54.113 } 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "method": "sock_impl_set_options", 00:05:54.113 "params": { 00:05:54.113 "impl_name": "ssl", 00:05:54.113 "recv_buf_size": 4096, 00:05:54.113 "send_buf_size": 4096, 
00:05:54.113 "enable_recv_pipe": true, 00:05:54.113 "enable_quickack": false, 00:05:54.113 "enable_placement_id": 0, 00:05:54.113 "enable_zerocopy_send_server": true, 00:05:54.113 "enable_zerocopy_send_client": false, 00:05:54.113 "zerocopy_threshold": 0, 00:05:54.113 "tls_version": 0, 00:05:54.113 "enable_ktls": false 00:05:54.113 } 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "method": "sock_impl_set_options", 00:05:54.113 "params": { 00:05:54.113 "impl_name": "posix", 00:05:54.113 "recv_buf_size": 2097152, 00:05:54.113 "send_buf_size": 2097152, 00:05:54.113 "enable_recv_pipe": true, 00:05:54.113 "enable_quickack": false, 00:05:54.113 "enable_placement_id": 0, 00:05:54.113 "enable_zerocopy_send_server": true, 00:05:54.113 "enable_zerocopy_send_client": false, 00:05:54.113 "zerocopy_threshold": 0, 00:05:54.113 "tls_version": 0, 00:05:54.113 "enable_ktls": false 00:05:54.113 } 00:05:54.113 } 00:05:54.113 ] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "vmd", 00:05:54.113 "config": [] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "accel", 00:05:54.113 "config": [ 00:05:54.113 { 00:05:54.113 "method": "accel_set_options", 00:05:54.113 "params": { 00:05:54.113 "small_cache_size": 128, 00:05:54.113 "large_cache_size": 16, 00:05:54.113 "task_count": 2048, 00:05:54.113 "sequence_count": 2048, 00:05:54.113 "buf_count": 2048 00:05:54.113 } 00:05:54.113 } 00:05:54.113 ] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "bdev", 00:05:54.113 "config": [ 00:05:54.113 { 00:05:54.113 "method": "bdev_set_options", 00:05:54.113 "params": { 00:05:54.113 "bdev_io_pool_size": 65535, 00:05:54.113 "bdev_io_cache_size": 256, 00:05:54.113 "bdev_auto_examine": true, 00:05:54.113 "iobuf_small_cache_size": 128, 00:05:54.113 "iobuf_large_cache_size": 16 00:05:54.113 } 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "method": "bdev_raid_set_options", 00:05:54.113 "params": { 00:05:54.113 "process_window_size_kb": 1024 00:05:54.113 } 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "method": "bdev_iscsi_set_options", 00:05:54.113 "params": { 00:05:54.113 "timeout_sec": 30 00:05:54.113 } 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "method": "bdev_nvme_set_options", 00:05:54.113 "params": { 00:05:54.113 "action_on_timeout": "none", 00:05:54.113 "timeout_us": 0, 00:05:54.113 "timeout_admin_us": 0, 00:05:54.113 "keep_alive_timeout_ms": 10000, 00:05:54.113 "arbitration_burst": 0, 00:05:54.113 "low_priority_weight": 0, 00:05:54.113 "medium_priority_weight": 0, 00:05:54.113 "high_priority_weight": 0, 00:05:54.113 "nvme_adminq_poll_period_us": 10000, 00:05:54.113 "nvme_ioq_poll_period_us": 0, 00:05:54.113 "io_queue_requests": 0, 00:05:54.113 "delay_cmd_submit": true, 00:05:54.113 "transport_retry_count": 4, 00:05:54.113 "bdev_retry_count": 3, 00:05:54.113 "transport_ack_timeout": 0, 00:05:54.113 "ctrlr_loss_timeout_sec": 0, 00:05:54.113 "reconnect_delay_sec": 0, 00:05:54.113 "fast_io_fail_timeout_sec": 0, 00:05:54.113 "disable_auto_failback": false, 00:05:54.113 "generate_uuids": false, 00:05:54.113 "transport_tos": 0, 00:05:54.113 "nvme_error_stat": false, 00:05:54.113 "rdma_srq_size": 0, 00:05:54.113 "io_path_stat": false, 00:05:54.113 "allow_accel_sequence": false, 00:05:54.113 "rdma_max_cq_size": 0, 00:05:54.113 "rdma_cm_event_timeout_ms": 0, 00:05:54.113 "dhchap_digests": [ 00:05:54.113 "sha256", 00:05:54.113 "sha384", 00:05:54.113 "sha512" 00:05:54.113 ], 00:05:54.113 "dhchap_dhgroups": [ 00:05:54.113 "null", 00:05:54.113 "ffdhe2048", 00:05:54.113 "ffdhe3072", 00:05:54.113 "ffdhe4096", 00:05:54.113 
"ffdhe6144", 00:05:54.113 "ffdhe8192" 00:05:54.113 ] 00:05:54.113 } 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "method": "bdev_nvme_set_hotplug", 00:05:54.113 "params": { 00:05:54.113 "period_us": 100000, 00:05:54.113 "enable": false 00:05:54.113 } 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "method": "bdev_wait_for_examine" 00:05:54.113 } 00:05:54.113 ] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "scsi", 00:05:54.113 "config": null 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "scheduler", 00:05:54.113 "config": [ 00:05:54.113 { 00:05:54.113 "method": "framework_set_scheduler", 00:05:54.113 "params": { 00:05:54.113 "name": "static" 00:05:54.113 } 00:05:54.113 } 00:05:54.113 ] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "vhost_scsi", 00:05:54.113 "config": [] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "vhost_blk", 00:05:54.113 "config": [] 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "subsystem": "ublk", 00:05:54.113 "config": [] 00:05:54.113 }, 00:05:54.113 { 00:05:54.114 "subsystem": "nbd", 00:05:54.114 "config": [] 00:05:54.114 }, 00:05:54.114 { 00:05:54.114 "subsystem": "nvmf", 00:05:54.114 "config": [ 00:05:54.114 { 00:05:54.114 "method": "nvmf_set_config", 00:05:54.114 "params": { 00:05:54.114 "discovery_filter": "match_any", 00:05:54.114 "admin_cmd_passthru": { 00:05:54.114 "identify_ctrlr": false 00:05:54.114 } 00:05:54.114 } 00:05:54.114 }, 00:05:54.114 { 00:05:54.114 "method": "nvmf_set_max_subsystems", 00:05:54.114 "params": { 00:05:54.114 "max_subsystems": 1024 00:05:54.114 } 00:05:54.114 }, 00:05:54.114 { 00:05:54.114 "method": "nvmf_set_crdt", 00:05:54.114 "params": { 00:05:54.114 "crdt1": 0, 00:05:54.114 "crdt2": 0, 00:05:54.114 "crdt3": 0 00:05:54.114 } 00:05:54.114 }, 00:05:54.114 { 00:05:54.114 "method": "nvmf_create_transport", 00:05:54.114 "params": { 00:05:54.114 "trtype": "TCP", 00:05:54.114 "max_queue_depth": 128, 00:05:54.114 "max_io_qpairs_per_ctrlr": 127, 00:05:54.114 "in_capsule_data_size": 4096, 00:05:54.114 "max_io_size": 131072, 00:05:54.114 "io_unit_size": 131072, 00:05:54.114 "max_aq_depth": 128, 00:05:54.114 "num_shared_buffers": 511, 00:05:54.114 "buf_cache_size": 4294967295, 00:05:54.114 "dif_insert_or_strip": false, 00:05:54.114 "zcopy": false, 00:05:54.114 "c2h_success": true, 00:05:54.114 "sock_priority": 0, 00:05:54.114 "abort_timeout_sec": 1, 00:05:54.114 "ack_timeout": 0, 00:05:54.114 "data_wr_pool_size": 0 00:05:54.114 } 00:05:54.114 } 00:05:54.114 ] 00:05:54.114 }, 00:05:54.114 { 00:05:54.114 "subsystem": "iscsi", 00:05:54.114 "config": [ 00:05:54.114 { 00:05:54.114 "method": "iscsi_set_options", 00:05:54.114 "params": { 00:05:54.114 "node_base": "iqn.2016-06.io.spdk", 00:05:54.114 "max_sessions": 128, 00:05:54.114 "max_connections_per_session": 2, 00:05:54.114 "max_queue_depth": 64, 00:05:54.114 "default_time2wait": 2, 00:05:54.114 "default_time2retain": 20, 00:05:54.114 "first_burst_length": 8192, 00:05:54.114 "immediate_data": true, 00:05:54.114 "allow_duplicated_isid": false, 00:05:54.114 "error_recovery_level": 0, 00:05:54.114 "nop_timeout": 60, 00:05:54.114 "nop_in_interval": 30, 00:05:54.114 "disable_chap": false, 00:05:54.114 "require_chap": false, 00:05:54.114 "mutual_chap": false, 00:05:54.114 "chap_group": 0, 00:05:54.114 "max_large_datain_per_connection": 64, 00:05:54.114 "max_r2t_per_connection": 4, 00:05:54.114 "pdu_pool_size": 36864, 00:05:54.114 "immediate_data_pool_size": 16384, 00:05:54.114 "data_out_pool_size": 2048 00:05:54.114 } 00:05:54.114 } 00:05:54.114 ] 00:05:54.114 } 
00:05:54.114 ] 00:05:54.114 } 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62260 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62260 ']' 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62260 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62260 00:05:54.114 killing process with pid 62260 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62260' 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62260 00:05:54.114 21:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62260 00:05:56.018 21:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62305 00:05:56.018 21:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.018 21:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62305 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62305 ']' 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62305 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62305 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.284 killing process with pid 62305 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62305' 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62305 00:06:01.284 21:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62305 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.188 00:06:03.188 real 0m10.068s 00:06:03.188 user 0m9.755s 00:06:03.188 sys 0m0.670s 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.188 
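Everything printed above is the output of save_config: a full JSON snapshot of the running target, including the nvmf TCP transport created moments earlier. The test writes it to test/rpc/config.json, restarts spdk_tgt from that file, and greps the fresh log for the 'TCP Transport Init' notice to prove the transport came back from the JSON rather than from live RPCs. A minimal sketch of the round trip, with paths shortened:

    # snapshot the live configuration of the first target
    scripts/rpc.py save_config > config.json

    # restart from the file; --no-rpc-server suffices since no further RPCs are needed
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    tgt_pid=$!
    sleep 5

    kill "$tgt_pid"; wait "$tgt_pid"
    grep -q 'TCP Transport Init' log.txt      # the transport was rebuilt from the JSON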
************************************ 00:06:03.188 END TEST skip_rpc_with_json 00:06:03.188 ************************************ 00:06:03.188 21:05:14 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.188 21:05:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:03.188 21:05:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.188 21:05:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.188 21:05:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.188 ************************************ 00:06:03.188 START TEST skip_rpc_with_delay 00:06:03.188 ************************************ 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.188 [2024-07-14 21:05:14.506650] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
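The error above is exactly what the test wants to see: --wait-for-rpc holds subsystem initialization until an RPC says go, which is meaningless when --no-rpc-server disables the RPC server, so spdk_tgt refuses the combination. For contrast, the legitimate --wait-for-rpc flow looks roughly like this sketch (framework_start_init is the RPC that releases the target from the wait state; only a small pre-init RPC set is available before it):

    build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    tgt_pid=$!

    scripts/rpc.py rpc_get_methods            # inspect the limited pre-init RPC set
    # ... issue pre-init configuration RPCs here (sock/iobuf options etc.) ...
    scripts/rpc.py framework_start_init       # finish subsystem initialization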
00:06:03.188 [2024-07-14 21:05:14.506902] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.188 00:06:03.188 real 0m0.190s 00:06:03.188 user 0m0.103s 00:06:03.188 sys 0m0.084s 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.188 21:05:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:03.188 ************************************ 00:06:03.188 END TEST skip_rpc_with_delay 00:06:03.188 ************************************ 00:06:03.188 21:05:14 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.188 21:05:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:03.188 21:05:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:03.188 21:05:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:03.188 21:05:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.188 21:05:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.188 21:05:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.188 ************************************ 00:06:03.188 START TEST exit_on_failed_rpc_init 00:06:03.188 ************************************ 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62439 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62439 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62439 ']' 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.188 21:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.447 [2024-07-14 21:05:14.750973] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:03.447 [2024-07-14 21:05:14.751148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62439 ] 00:06:03.447 [2024-07-14 21:05:14.918167] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.706 [2024-07-14 21:05:15.073754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.273 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.274 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.274 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.274 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.274 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.274 21:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.274 [2024-07-14 21:05:15.809955] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:04.274 [2024-07-14 21:05:15.810153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62457 ] 00:06:04.541 [2024-07-14 21:05:15.986589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.807 [2024-07-14 21:05:16.194071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.807 [2024-07-14 21:05:16.194213] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
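This is the intended failure path: the second instance (core mask 0x2, pid 62457) defaults to the same /var/tmp/spdk.sock as the first, _spdk_rpc_listen rejects it, and the child exits non-zero so exit_on_failed_rpc_init can verify the error handling. Outside of the test, two targets simply need distinct RPC sockets; a sketch assuming the standard -r (target) and -s (client) socket options:

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &

    # point the client at whichever instance should handle the call
    scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
    scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version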
00:06:04.807 [2024-07-14 21:05:16.194246] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:04.807 [2024-07-14 21:05:16.194270] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62439 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62439 ']' 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62439 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62439 00:06:05.065 killing process with pid 62439 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62439' 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62439 00:06:05.065 21:05:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62439 00:06:06.970 00:06:06.970 real 0m3.791s 00:06:06.970 user 0m4.508s 00:06:06.970 sys 0m0.485s 00:06:06.970 21:05:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.970 21:05:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 ************************************ 00:06:06.970 END TEST exit_on_failed_rpc_init 00:06:06.970 ************************************ 00:06:06.970 21:05:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.970 21:05:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.970 ************************************ 00:06:06.970 END TEST skip_rpc 00:06:06.970 ************************************ 00:06:06.970 00:06:06.970 real 0m21.132s 00:06:06.970 user 0m20.857s 00:06:06.970 sys 0m1.710s 00:06:06.970 21:05:18 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.970 21:05:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 21:05:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.970 21:05:18 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.970 21:05:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.970 
21:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.970 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 ************************************ 00:06:06.970 START TEST rpc_client 00:06:06.970 ************************************ 00:06:06.970 21:05:18 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:07.229 * Looking for test storage... 00:06:07.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:07.229 21:05:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:07.229 OK 00:06:07.229 21:05:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:07.229 00:06:07.229 real 0m0.144s 00:06:07.229 user 0m0.073s 00:06:07.229 sys 0m0.077s 00:06:07.229 21:05:18 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.229 ************************************ 00:06:07.229 21:05:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:07.229 END TEST rpc_client 00:06:07.229 ************************************ 00:06:07.229 21:05:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:07.229 21:05:18 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:07.229 21:05:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.229 21:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.229 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.229 ************************************ 00:06:07.229 START TEST json_config 00:06:07.229 ************************************ 00:06:07.229 21:05:18 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:07.229 21:05:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:98373986-152d-4edd-b0f9-b4d926b76024 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=98373986-152d-4edd-b0f9-b4d926b76024 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.229 21:05:18 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.229 21:05:18 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.229 21:05:18 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.229 21:05:18 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.229 21:05:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.229 21:05:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.229 21:05:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.229 21:05:18 json_config -- paths/export.sh@5 -- # export PATH 00:06:07.229 21:05:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@47 -- # : 0 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:07.229 21:05:18 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:07.229 21:05:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:07.229 21:05:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:07.229 21:05:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:07.229 21:05:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:07.229 21:05:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:07.229 WARNING: No tests are enabled so not running JSON configuration tests 00:06:07.229 21:05:18 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:07.229 21:05:18 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:07.229 00:06:07.229 real 0m0.074s 00:06:07.229 user 0m0.043s 00:06:07.229 sys 0m0.032s 00:06:07.229 21:05:18 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.229 21:05:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.229 ************************************ 00:06:07.229 END TEST json_config 00:06:07.229 ************************************ 00:06:07.488 21:05:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:07.488 21:05:18 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.488 21:05:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.488 21:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.488 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.488 ************************************ 00:06:07.488 START TEST json_config_extra_key 00:06:07.488 ************************************ 00:06:07.488 21:05:18 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.488 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:98373986-152d-4edd-b0f9-b4d926b76024 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=98373986-152d-4edd-b0f9-b4d926b76024 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.488 21:05:18 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.488 21:05:18 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:07.488 21:05:18 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.489 21:05:18 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.489 21:05:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.489 21:05:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.489 21:05:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.489 21:05:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:07.489 21:05:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:07.489 21:05:18 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:07.489 INFO: launching applications... 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:07.489 21:05:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62632 00:06:07.489 Waiting for target to run... 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62632 /var/tmp/spdk_tgt.sock 00:06:07.489 21:05:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:07.489 21:05:18 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62632 ']' 00:06:07.489 21:05:18 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:07.489 21:05:18 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:07.489 21:05:18 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:07.489 21:05:18 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.489 21:05:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.489 [2024-07-14 21:05:18.992233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
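[annotation] The trace above launches spdk_tgt with -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json and then blocks in waitforlisten until the target answers on its RPC socket (the traced "local max_retries=100" caps the polling). The helper's body is not reproduced in this log; a minimal sketch of the same poll-until-listening pattern, with the function name, probe command, and sleep interval all assumed, would be:

    # Hypothetical reconstruction -- the real waitforlisten lives in
    # autotest_common.sh and may differ in detail.
    wait_for_socket() {
        local pid=$1 sock=$2 retries=${3:-100}
        for ((i = 0; i < retries; i++)); do
            # Bail out early if the target died before it could listen.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc.py exits non-zero until the app is up and serving RPCs.
            if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }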
00:06:07.489 [2024-07-14 21:05:18.992415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62632 ] 00:06:08.059 [2024-07-14 21:05:19.299733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.059 [2024-07-14 21:05:19.460533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.633 21:05:20 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.634 00:06:08.634 21:05:20 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.634 INFO: shutting down applications... 00:06:08.634 21:05:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:08.634 21:05:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62632 ]] 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62632 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62632 00:06:08.634 21:05:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.201 21:05:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.201 21:05:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.201 21:05:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62632 00:06:09.201 21:05:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.769 21:05:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.769 21:05:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.769 21:05:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62632 00:06:09.769 21:05:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.027 21:05:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.027 21:05:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.027 21:05:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62632 00:06:10.027 21:05:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.595 21:05:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.595 21:05:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.595 21:05:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62632 00:06:10.595 21:05:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.163 21:05:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.163 21:05:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.163 21:05:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62632 
00:06:11.163 21:05:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.163 21:05:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:11.163 21:05:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.163 SPDK target shutdown done 00:06:11.163 21:05:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.163 Success 00:06:11.163 21:05:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:11.163 00:06:11.163 real 0m3.737s 00:06:11.163 user 0m3.286s 00:06:11.163 sys 0m0.424s 00:06:11.163 21:05:22 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.163 21:05:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.163 ************************************ 00:06:11.163 END TEST json_config_extra_key 00:06:11.163 ************************************ 00:06:11.163 21:05:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.163 21:05:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.163 21:05:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.163 21:05:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.163 21:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:11.163 ************************************ 00:06:11.163 START TEST alias_rpc 00:06:11.163 ************************************ 00:06:11.163 21:05:22 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.163 * Looking for test storage... 00:06:11.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:11.163 21:05:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.163 21:05:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62723 00:06:11.163 21:05:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62723 00:06:11.163 21:05:22 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 62723 ']' 00:06:11.163 21:05:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.163 21:05:22 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.163 21:05:22 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.163 21:05:22 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.163 21:05:22 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.163 21:05:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.422 [2024-07-14 21:05:22.800528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
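[annotation] Reassembled from the one-command-per-line xtrace above, the shutdown sequence run by json_config_test_shutdown_app is a SIGINT followed by a bounded liveness poll: kill -SIGINT, then up to 30 rounds of kill -0 plus sleep 0.5, clearing app_pid and breaking once the PID is gone. The loop below mirrors the traced commands, though the exact control flow in json_config/common.sh may differ slightly:

    # app_pid is the associative array declared earlier in the trace.
    app=target
    kill -SIGINT "${app_pid[$app]}"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 sends no signal; it only tests whether the PID still exists.
        if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then
            app_pid[$app]=    # forget the PID once the target has exited
            break
        fi
        sleep 0.5
    done

In this run the sixth liveness check fails after five half-second sleeps, at which point the trace prints the 'SPDK target shutdown done' banner.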
00:06:11.422 [2024-07-14 21:05:22.800793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62723 ] 00:06:11.681 [2024-07-14 21:05:22.971873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.681 [2024-07-14 21:05:23.127962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.248 21:05:23 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.248 21:05:23 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.248 21:05:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:12.507 21:05:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62723 00:06:12.507 21:05:23 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 62723 ']' 00:06:12.507 21:05:23 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 62723 00:06:12.507 21:05:23 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:12.507 21:05:24 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.507 21:05:24 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62723 00:06:12.507 21:05:24 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.507 21:05:24 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.507 killing process with pid 62723 00:06:12.507 21:05:24 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62723' 00:06:12.507 21:05:24 alias_rpc -- common/autotest_common.sh@967 -- # kill 62723 00:06:12.507 21:05:24 alias_rpc -- common/autotest_common.sh@972 -- # wait 62723 00:06:14.412 00:06:14.412 real 0m3.321s 00:06:14.412 user 0m3.543s 00:06:14.412 sys 0m0.412s 00:06:14.412 21:05:25 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.412 21:05:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.412 ************************************ 00:06:14.412 END TEST alias_rpc 00:06:14.412 ************************************ 00:06:14.671 21:05:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.671 21:05:25 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:14.671 21:05:25 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.671 21:05:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.671 21:05:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.671 21:05:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.671 ************************************ 00:06:14.671 START TEST spdkcli_tcp 00:06:14.671 ************************************ 00:06:14.671 21:05:25 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.671 * Looking for test storage... 
00:06:14.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.671 21:05:26 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.671 21:05:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62817 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62817 00:06:14.671 21:05:26 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 62817 ']' 00:06:14.671 21:05:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.671 21:05:26 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.671 21:05:26 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.671 21:05:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.671 21:05:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.671 21:05:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.671 [2024-07-14 21:05:26.184169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
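[annotation] Unlike the single-core targets above, this tcp.sh target is started with -m 0x3, a cpumask selecting cores 0 and 1 (-p 0 pins the main lcore to core 0), so the reactor start-up notices that follow report two reactors. A cpumask is just a bitmask over core indices; a throwaway snippet to decode one, with the core range assumed for illustration:

    # Illustrative only: decode which cores a cpumask like -m 0x3 selects.
    mask=0x3
    for (( core = 0; core < 8; core++ )); do
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done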
00:06:14.671 [2024-07-14 21:05:26.184878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62817 ] 00:06:14.930 [2024-07-14 21:05:26.356252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.190 [2024-07-14 21:05:26.514149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.190 [2024-07-14 21:05:26.514165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.758 21:05:27 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.758 21:05:27 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:15.758 21:05:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62834 00:06:15.758 21:05:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.758 21:05:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:16.017 [ 00:06:16.017 "bdev_malloc_delete", 00:06:16.017 "bdev_malloc_create", 00:06:16.017 "bdev_null_resize", 00:06:16.017 "bdev_null_delete", 00:06:16.017 "bdev_null_create", 00:06:16.017 "bdev_nvme_cuse_unregister", 00:06:16.017 "bdev_nvme_cuse_register", 00:06:16.017 "bdev_opal_new_user", 00:06:16.017 "bdev_opal_set_lock_state", 00:06:16.017 "bdev_opal_delete", 00:06:16.017 "bdev_opal_get_info", 00:06:16.017 "bdev_opal_create", 00:06:16.017 "bdev_nvme_opal_revert", 00:06:16.017 "bdev_nvme_opal_init", 00:06:16.017 "bdev_nvme_send_cmd", 00:06:16.017 "bdev_nvme_get_path_iostat", 00:06:16.017 "bdev_nvme_get_mdns_discovery_info", 00:06:16.017 "bdev_nvme_stop_mdns_discovery", 00:06:16.017 "bdev_nvme_start_mdns_discovery", 00:06:16.017 "bdev_nvme_set_multipath_policy", 00:06:16.017 "bdev_nvme_set_preferred_path", 00:06:16.017 "bdev_nvme_get_io_paths", 00:06:16.017 "bdev_nvme_remove_error_injection", 00:06:16.017 "bdev_nvme_add_error_injection", 00:06:16.017 "bdev_nvme_get_discovery_info", 00:06:16.017 "bdev_nvme_stop_discovery", 00:06:16.017 "bdev_nvme_start_discovery", 00:06:16.017 "bdev_nvme_get_controller_health_info", 00:06:16.017 "bdev_nvme_disable_controller", 00:06:16.017 "bdev_nvme_enable_controller", 00:06:16.017 "bdev_nvme_reset_controller", 00:06:16.017 "bdev_nvme_get_transport_statistics", 00:06:16.017 "bdev_nvme_apply_firmware", 00:06:16.017 "bdev_nvme_detach_controller", 00:06:16.017 "bdev_nvme_get_controllers", 00:06:16.017 "bdev_nvme_attach_controller", 00:06:16.017 "bdev_nvme_set_hotplug", 00:06:16.017 "bdev_nvme_set_options", 00:06:16.017 "bdev_passthru_delete", 00:06:16.017 "bdev_passthru_create", 00:06:16.017 "bdev_lvol_set_parent_bdev", 00:06:16.017 "bdev_lvol_set_parent", 00:06:16.017 "bdev_lvol_check_shallow_copy", 00:06:16.017 "bdev_lvol_start_shallow_copy", 00:06:16.017 "bdev_lvol_grow_lvstore", 00:06:16.017 "bdev_lvol_get_lvols", 00:06:16.017 "bdev_lvol_get_lvstores", 00:06:16.017 "bdev_lvol_delete", 00:06:16.017 "bdev_lvol_set_read_only", 00:06:16.017 "bdev_lvol_resize", 00:06:16.017 "bdev_lvol_decouple_parent", 00:06:16.017 "bdev_lvol_inflate", 00:06:16.017 "bdev_lvol_rename", 00:06:16.017 "bdev_lvol_clone_bdev", 00:06:16.017 "bdev_lvol_clone", 00:06:16.017 "bdev_lvol_snapshot", 00:06:16.017 "bdev_lvol_create", 00:06:16.017 "bdev_lvol_delete_lvstore", 00:06:16.017 "bdev_lvol_rename_lvstore", 00:06:16.017 "bdev_lvol_create_lvstore", 
00:06:16.017 "bdev_raid_set_options", 00:06:16.017 "bdev_raid_remove_base_bdev", 00:06:16.017 "bdev_raid_add_base_bdev", 00:06:16.017 "bdev_raid_delete", 00:06:16.018 "bdev_raid_create", 00:06:16.018 "bdev_raid_get_bdevs", 00:06:16.018 "bdev_error_inject_error", 00:06:16.018 "bdev_error_delete", 00:06:16.018 "bdev_error_create", 00:06:16.018 "bdev_split_delete", 00:06:16.018 "bdev_split_create", 00:06:16.018 "bdev_delay_delete", 00:06:16.018 "bdev_delay_create", 00:06:16.018 "bdev_delay_update_latency", 00:06:16.018 "bdev_zone_block_delete", 00:06:16.018 "bdev_zone_block_create", 00:06:16.018 "blobfs_create", 00:06:16.018 "blobfs_detect", 00:06:16.018 "blobfs_set_cache_size", 00:06:16.018 "bdev_xnvme_delete", 00:06:16.018 "bdev_xnvme_create", 00:06:16.018 "bdev_aio_delete", 00:06:16.018 "bdev_aio_rescan", 00:06:16.018 "bdev_aio_create", 00:06:16.018 "bdev_ftl_set_property", 00:06:16.018 "bdev_ftl_get_properties", 00:06:16.018 "bdev_ftl_get_stats", 00:06:16.018 "bdev_ftl_unmap", 00:06:16.018 "bdev_ftl_unload", 00:06:16.018 "bdev_ftl_delete", 00:06:16.018 "bdev_ftl_load", 00:06:16.018 "bdev_ftl_create", 00:06:16.018 "bdev_virtio_attach_controller", 00:06:16.018 "bdev_virtio_scsi_get_devices", 00:06:16.018 "bdev_virtio_detach_controller", 00:06:16.018 "bdev_virtio_blk_set_hotplug", 00:06:16.018 "bdev_iscsi_delete", 00:06:16.018 "bdev_iscsi_create", 00:06:16.018 "bdev_iscsi_set_options", 00:06:16.018 "accel_error_inject_error", 00:06:16.018 "ioat_scan_accel_module", 00:06:16.018 "dsa_scan_accel_module", 00:06:16.018 "iaa_scan_accel_module", 00:06:16.018 "keyring_file_remove_key", 00:06:16.018 "keyring_file_add_key", 00:06:16.018 "keyring_linux_set_options", 00:06:16.018 "iscsi_get_histogram", 00:06:16.018 "iscsi_enable_histogram", 00:06:16.018 "iscsi_set_options", 00:06:16.018 "iscsi_get_auth_groups", 00:06:16.018 "iscsi_auth_group_remove_secret", 00:06:16.018 "iscsi_auth_group_add_secret", 00:06:16.018 "iscsi_delete_auth_group", 00:06:16.018 "iscsi_create_auth_group", 00:06:16.018 "iscsi_set_discovery_auth", 00:06:16.018 "iscsi_get_options", 00:06:16.018 "iscsi_target_node_request_logout", 00:06:16.018 "iscsi_target_node_set_redirect", 00:06:16.018 "iscsi_target_node_set_auth", 00:06:16.018 "iscsi_target_node_add_lun", 00:06:16.018 "iscsi_get_stats", 00:06:16.018 "iscsi_get_connections", 00:06:16.018 "iscsi_portal_group_set_auth", 00:06:16.018 "iscsi_start_portal_group", 00:06:16.018 "iscsi_delete_portal_group", 00:06:16.018 "iscsi_create_portal_group", 00:06:16.018 "iscsi_get_portal_groups", 00:06:16.018 "iscsi_delete_target_node", 00:06:16.018 "iscsi_target_node_remove_pg_ig_maps", 00:06:16.018 "iscsi_target_node_add_pg_ig_maps", 00:06:16.018 "iscsi_create_target_node", 00:06:16.018 "iscsi_get_target_nodes", 00:06:16.018 "iscsi_delete_initiator_group", 00:06:16.018 "iscsi_initiator_group_remove_initiators", 00:06:16.018 "iscsi_initiator_group_add_initiators", 00:06:16.018 "iscsi_create_initiator_group", 00:06:16.018 "iscsi_get_initiator_groups", 00:06:16.018 "nvmf_set_crdt", 00:06:16.018 "nvmf_set_config", 00:06:16.018 "nvmf_set_max_subsystems", 00:06:16.018 "nvmf_stop_mdns_prr", 00:06:16.018 "nvmf_publish_mdns_prr", 00:06:16.018 "nvmf_subsystem_get_listeners", 00:06:16.018 "nvmf_subsystem_get_qpairs", 00:06:16.018 "nvmf_subsystem_get_controllers", 00:06:16.018 "nvmf_get_stats", 00:06:16.018 "nvmf_get_transports", 00:06:16.018 "nvmf_create_transport", 00:06:16.018 "nvmf_get_targets", 00:06:16.018 "nvmf_delete_target", 00:06:16.018 "nvmf_create_target", 00:06:16.018 
"nvmf_subsystem_allow_any_host", 00:06:16.018 "nvmf_subsystem_remove_host", 00:06:16.018 "nvmf_subsystem_add_host", 00:06:16.018 "nvmf_ns_remove_host", 00:06:16.018 "nvmf_ns_add_host", 00:06:16.018 "nvmf_subsystem_remove_ns", 00:06:16.018 "nvmf_subsystem_add_ns", 00:06:16.018 "nvmf_subsystem_listener_set_ana_state", 00:06:16.018 "nvmf_discovery_get_referrals", 00:06:16.018 "nvmf_discovery_remove_referral", 00:06:16.018 "nvmf_discovery_add_referral", 00:06:16.018 "nvmf_subsystem_remove_listener", 00:06:16.018 "nvmf_subsystem_add_listener", 00:06:16.018 "nvmf_delete_subsystem", 00:06:16.018 "nvmf_create_subsystem", 00:06:16.018 "nvmf_get_subsystems", 00:06:16.018 "env_dpdk_get_mem_stats", 00:06:16.018 "nbd_get_disks", 00:06:16.018 "nbd_stop_disk", 00:06:16.018 "nbd_start_disk", 00:06:16.018 "ublk_recover_disk", 00:06:16.018 "ublk_get_disks", 00:06:16.018 "ublk_stop_disk", 00:06:16.018 "ublk_start_disk", 00:06:16.018 "ublk_destroy_target", 00:06:16.018 "ublk_create_target", 00:06:16.018 "virtio_blk_create_transport", 00:06:16.018 "virtio_blk_get_transports", 00:06:16.018 "vhost_controller_set_coalescing", 00:06:16.018 "vhost_get_controllers", 00:06:16.018 "vhost_delete_controller", 00:06:16.018 "vhost_create_blk_controller", 00:06:16.018 "vhost_scsi_controller_remove_target", 00:06:16.018 "vhost_scsi_controller_add_target", 00:06:16.018 "vhost_start_scsi_controller", 00:06:16.018 "vhost_create_scsi_controller", 00:06:16.018 "thread_set_cpumask", 00:06:16.018 "framework_get_governor", 00:06:16.018 "framework_get_scheduler", 00:06:16.018 "framework_set_scheduler", 00:06:16.018 "framework_get_reactors", 00:06:16.018 "thread_get_io_channels", 00:06:16.018 "thread_get_pollers", 00:06:16.018 "thread_get_stats", 00:06:16.018 "framework_monitor_context_switch", 00:06:16.018 "spdk_kill_instance", 00:06:16.018 "log_enable_timestamps", 00:06:16.018 "log_get_flags", 00:06:16.018 "log_clear_flag", 00:06:16.018 "log_set_flag", 00:06:16.018 "log_get_level", 00:06:16.018 "log_set_level", 00:06:16.018 "log_get_print_level", 00:06:16.018 "log_set_print_level", 00:06:16.018 "framework_enable_cpumask_locks", 00:06:16.018 "framework_disable_cpumask_locks", 00:06:16.018 "framework_wait_init", 00:06:16.018 "framework_start_init", 00:06:16.018 "scsi_get_devices", 00:06:16.018 "bdev_get_histogram", 00:06:16.018 "bdev_enable_histogram", 00:06:16.018 "bdev_set_qos_limit", 00:06:16.018 "bdev_set_qd_sampling_period", 00:06:16.018 "bdev_get_bdevs", 00:06:16.018 "bdev_reset_iostat", 00:06:16.018 "bdev_get_iostat", 00:06:16.018 "bdev_examine", 00:06:16.018 "bdev_wait_for_examine", 00:06:16.018 "bdev_set_options", 00:06:16.018 "notify_get_notifications", 00:06:16.018 "notify_get_types", 00:06:16.018 "accel_get_stats", 00:06:16.018 "accel_set_options", 00:06:16.018 "accel_set_driver", 00:06:16.018 "accel_crypto_key_destroy", 00:06:16.018 "accel_crypto_keys_get", 00:06:16.018 "accel_crypto_key_create", 00:06:16.018 "accel_assign_opc", 00:06:16.018 "accel_get_module_info", 00:06:16.018 "accel_get_opc_assignments", 00:06:16.018 "vmd_rescan", 00:06:16.018 "vmd_remove_device", 00:06:16.018 "vmd_enable", 00:06:16.018 "sock_get_default_impl", 00:06:16.018 "sock_set_default_impl", 00:06:16.018 "sock_impl_set_options", 00:06:16.018 "sock_impl_get_options", 00:06:16.018 "iobuf_get_stats", 00:06:16.018 "iobuf_set_options", 00:06:16.018 "framework_get_pci_devices", 00:06:16.018 "framework_get_config", 00:06:16.018 "framework_get_subsystems", 00:06:16.018 "trace_get_info", 00:06:16.018 "trace_get_tpoint_group_mask", 00:06:16.018 
"trace_disable_tpoint_group", 00:06:16.018 "trace_enable_tpoint_group", 00:06:16.018 "trace_clear_tpoint_mask", 00:06:16.018 "trace_set_tpoint_mask", 00:06:16.018 "keyring_get_keys", 00:06:16.018 "spdk_get_version", 00:06:16.018 "rpc_get_methods" 00:06:16.018 ] 00:06:16.018 21:05:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.018 21:05:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:16.018 21:05:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62817 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 62817 ']' 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 62817 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62817 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.018 killing process with pid 62817 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62817' 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 62817 00:06:16.018 21:05:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 62817 00:06:17.923 00:06:17.923 real 0m3.345s 00:06:17.923 user 0m5.949s 00:06:17.923 sys 0m0.477s 00:06:17.923 21:05:29 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.923 ************************************ 00:06:17.923 21:05:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.924 END TEST spdkcli_tcp 00:06:17.924 ************************************ 00:06:17.924 21:05:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.924 21:05:29 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.924 21:05:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.924 21:05:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.924 21:05:29 -- common/autotest_common.sh@10 -- # set +x 00:06:17.924 ************************************ 00:06:17.924 START TEST dpdk_mem_utility 00:06:17.924 ************************************ 00:06:17.924 21:05:29 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.924 * Looking for test storage... 
00:06:17.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:17.924 21:05:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:17.924 21:05:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62920 00:06:17.924 21:05:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62920 00:06:17.924 21:05:29 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 62920 ']' 00:06:17.924 21:05:29 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.924 21:05:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.924 21:05:29 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.924 21:05:29 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.924 21:05:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.924 21:05:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.182 [2024-07-14 21:05:29.563436] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:18.182 [2024-07-14 21:05:29.563644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62920 ] 00:06:18.442 [2024-07-14 21:05:29.731682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.442 [2024-07-14 21:05:29.893493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.009 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.009 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:19.009 21:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.009 21:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.009 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.009 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.009 { 00:06:19.009 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.009 } 00:06:19.009 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.009 21:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:19.269 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:19.269 1 heaps totaling size 820.000000 MiB 00:06:19.269 size: 820.000000 MiB heap id: 0 00:06:19.269 end heaps---------- 00:06:19.269 8 mempools totaling size 598.116089 MiB 00:06:19.269 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.269 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.269 size: 84.521057 MiB name: bdev_io_62920 00:06:19.269 size: 51.011292 MiB name: evtpool_62920 00:06:19.269 size: 50.003479 MiB name: msgpool_62920 00:06:19.269 size: 21.763794 MiB name: PDU_Pool 00:06:19.269 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:06:19.269 size: 0.026123 MiB name: Session_Pool 00:06:19.269 end mempools------- 00:06:19.269 6 memzones totaling size 4.142822 MiB 00:06:19.269 size: 1.000366 MiB name: RG_ring_0_62920 00:06:19.269 size: 1.000366 MiB name: RG_ring_1_62920 00:06:19.269 size: 1.000366 MiB name: RG_ring_4_62920 00:06:19.269 size: 1.000366 MiB name: RG_ring_5_62920 00:06:19.269 size: 0.125366 MiB name: RG_ring_2_62920 00:06:19.269 size: 0.015991 MiB name: RG_ring_3_62920 00:06:19.269 end memzones------- 00:06:19.269 21:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:19.269 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:06:19.269 list of free elements. size: 18.451294 MiB 00:06:19.269 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:19.269 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:19.269 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:19.269 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:19.269 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:19.269 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:19.269 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:19.269 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:19.269 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:19.269 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:19.269 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:19.269 element at address: 0x200000200000 with size: 0.829956 MiB 00:06:19.269 element at address: 0x20001b000000 with size: 0.564148 MiB 00:06:19.269 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:19.269 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:19.269 element at address: 0x200013800000 with size: 0.467651 MiB 00:06:19.269 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:19.269 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:19.269 list of standard malloc elements. 
size: 199.284302 MiB 00:06:19.269 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:19.269 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:19.269 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:19.269 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:19.269 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:19.269 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:19.269 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:19.269 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:19.269 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:19.269 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:19.269 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:19.269 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:06:19.269 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:19.269 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:19.269 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:19.269 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013877b80 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:06:19.270 element at address: 0x20001b090dc0 
with size: 0.000244 MiB
00:06:19.270 element at address: 0x20001b090ec0 with size: 0.000244 MiB
[... remaining 0.000244 MiB elements, 0x20001b090fc0 through 0x20001b0953c0 at 0x100 intervals, elided ...]
00:06:19.270 element at address: 0x200028463f40 with size: 0.000244 MiB
00:06:19.270 element at address: 0x200028464040 with size: 0.000244 MiB
[... remaining 0.000244 MiB elements, 0x20002846ad00 through 0x20002846fe80, elided ...]
00:06:19.271 list of memzone associated elements. size: 602.264404 MiB
00:06:19.271 element at address: 0x20001b0954c0 with size: 211.416809 MiB
00:06:19.271 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:19.271 element at address: 0x20002846ff80 with size: 157.562622 MiB
00:06:19.271 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:19.271 element at address: 0x2000139fab40 with size: 84.020691 MiB
00:06:19.271 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62920_0
00:06:19.271 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:06:19.271 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62920_0
00:06:19.271 element at address: 0x200003fff340 with size: 48.003113 MiB
00:06:19.271 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62920_0
00:06:19.271 element at address: 0x200019bbe900 with size: 20.255615 MiB
00:06:19.271 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:19.271 element at address: 0x2000323feb00 with size: 18.005127 MiB
00:06:19.271 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:19.271 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:06:19.271 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62920
00:06:19.271 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:06:19.271 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62920
00:06:19.271 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:19.271 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62920
00:06:19.271 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:06:19.271 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:19.271 element at address: 0x200019abc780 with size: 1.008179 MiB
00:06:19.271 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:19.271 element at address: 0x200018efde00 with size: 1.008179 MiB
00:06:19.271 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:19.271 element at address: 0x2000138f89c0 with size: 1.008179 MiB
00:06:19.271 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:19.271 element at address: 0x200003eff100 with size: 1.000549 MiB
00:06:19.271 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62920
00:06:19.271 element at address: 0x200003affb80 with size: 1.000549 MiB
00:06:19.271 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62920
00:06:19.271 element at address: 0x2000196ffd40 with size: 1.000549 MiB
00:06:19.271 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62920
00:06:19.271 element at address: 0x2000322fe8c0 with size: 1.000549 MiB
00:06:19.271 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62920
00:06:19.271 element at address: 0x200003a5b2c0 with size: 0.500549 MiB
00:06:19.271 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62920
00:06:19.271 element at address: 0x20001927dac0 with size: 0.500549 MiB
00:06:19.271 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:19.271 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:19.271 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:19.271 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:19.271 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:19.271 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:19.271 associated memzone info: size: 0.125366 MiB name: RG_ring_2_62920 00:06:19.271 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:19.271 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:19.271 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:19.271 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:19.271 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:19.271 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62920 00:06:19.271 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:19.271 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:19.271 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:19.271 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62920 00:06:19.271 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:19.271 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62920 00:06:19.271 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:19.271 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:19.271 21:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:19.271 21:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62920 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 62920 ']' 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 62920 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62920 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62920' 00:06:19.271 killing process with pid 62920 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 62920 00:06:19.271 21:05:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 62920 00:06:21.173 00:06:21.173 real 0m3.078s 00:06:21.173 user 0m3.183s 00:06:21.173 sys 0m0.419s 00:06:21.173 21:05:32 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.173 ************************************ 00:06:21.173 END TEST dpdk_mem_utility 00:06:21.173 ************************************ 00:06:21.173 21:05:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.173 21:05:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.173 21:05:32 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:21.173 21:05:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.173 21:05:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.173 21:05:32 
-- common/autotest_common.sh@10 -- # set +x 00:06:21.173 ************************************ 00:06:21.173 START TEST event 00:06:21.173 ************************************ 00:06:21.173 21:05:32 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:21.173 * Looking for test storage... 00:06:21.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:21.173 21:05:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:21.173 21:05:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.173 21:05:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.173 21:05:32 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:21.173 21:05:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.173 21:05:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.173 ************************************ 00:06:21.173 START TEST event_perf 00:06:21.173 ************************************ 00:06:21.173 21:05:32 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.173 Running I/O for 1 seconds...[2024-07-14 21:05:32.641254] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:21.173 [2024-07-14 21:05:32.641437] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63009 ] 00:06:21.431 [2024-07-14 21:05:32.812534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.431 [2024-07-14 21:05:32.967291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.431 Running I/O for 1 seconds...[2024-07-14 21:05:32.967436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.431 [2024-07-14 21:05:32.967572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.431 [2024-07-14 21:05:32.967686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.840 00:06:22.840 lcore 0: 198559 00:06:22.840 lcore 1: 198558 00:06:22.840 lcore 2: 198558 00:06:22.840 lcore 3: 198560 00:06:22.840 done. 
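The START TEST / END TEST banners and the real/user/sys timing that follows each binary are emitted by the run_test wrapper in common/autotest_common.sh. A rough sketch of what it does, reconstructed from the xtraced steps in this trace (the exact body is assumed):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # run the test command; bash prints real/user/sys
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return 0                  # the trace shows an explicit `return 0` on success
    }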
00:06:22.840 00:06:22.840 real 0m1.736s 00:06:22.840 user 0m4.515s 00:06:22.840 sys 0m0.102s 00:06:22.840 21:05:34 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.840 ************************************ 00:06:22.840 END TEST event_perf 00:06:22.840 21:05:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.840 ************************************ 00:06:22.840 21:05:34 event -- common/autotest_common.sh@1142 -- # return 0 00:06:22.840 21:05:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:22.840 21:05:34 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:22.840 21:05:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.840 21:05:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.098 ************************************ 00:06:23.098 START TEST event_reactor 00:06:23.098 ************************************ 00:06:23.098 21:05:34 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.098 [2024-07-14 21:05:34.432567] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:23.098 [2024-07-14 21:05:34.432790] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63054 ] 00:06:23.098 [2024-07-14 21:05:34.602706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.355 [2024-07-14 21:05:34.753371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.731 test_start 00:06:24.731 oneshot 00:06:24.731 tick 100 00:06:24.731 tick 100 00:06:24.731 tick 250 00:06:24.731 tick 100 00:06:24.731 tick 100 00:06:24.731 tick 100 00:06:24.731 tick 250 00:06:24.731 tick 500 00:06:24.732 tick 100 00:06:24.732 tick 100 00:06:24.732 tick 250 00:06:24.732 tick 100 00:06:24.732 tick 100 00:06:24.732 test_end 00:06:24.732 00:06:24.732 real 0m1.693s 00:06:24.732 user 0m1.479s 00:06:24.732 sys 0m0.106s 00:06:24.732 21:05:36 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.732 21:05:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:24.732 ************************************ 00:06:24.732 END TEST event_reactor 00:06:24.732 ************************************ 00:06:24.732 21:05:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.732 21:05:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:24.732 21:05:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.732 21:05:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.732 21:05:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.732 ************************************ 00:06:24.732 START TEST event_reactor_perf 00:06:24.732 ************************************ 00:06:24.732 21:05:36 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:24.732 [2024-07-14 21:05:36.182123] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
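reactor_perf, starting here, is the last of the three event-framework binaries this suite drives. Condensed from the run_test lines above, the sequence is (paths exactly as logged):

    REPO=/home/vagrant/spdk_repo/spdk
    "$REPO/test/event/event_perf/event_perf" -m 0xF -t 1   # events/sec across four cores
    "$REPO/test/event/reactor/reactor" -t 1                # timer-tick trace on one core
    "$REPO/test/event/reactor_perf/reactor_perf" -t 1      # raw event throughput, one core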
00:06:24.732 [2024-07-14 21:05:36.182289] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63095 ] 00:06:24.991 [2024-07-14 21:05:36.351368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.991 [2024-07-14 21:05:36.505192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.368 test_start 00:06:26.368 test_end 00:06:26.368 Performance: 338043 events per second 00:06:26.368 00:06:26.368 real 0m1.685s 00:06:26.368 user 0m1.481s 00:06:26.368 sys 0m0.096s 00:06:26.368 21:05:37 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.368 21:05:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.368 ************************************ 00:06:26.368 END TEST event_reactor_perf 00:06:26.368 ************************************ 00:06:26.368 21:05:37 event -- common/autotest_common.sh@1142 -- # return 0 00:06:26.368 21:05:37 event -- event/event.sh@49 -- # uname -s 00:06:26.368 21:05:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:26.368 21:05:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:26.368 21:05:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.368 21:05:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.368 21:05:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.368 ************************************ 00:06:26.368 START TEST event_scheduler 00:06:26.368 ************************************ 00:06:26.368 21:05:37 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:26.627 * Looking for test storage... 00:06:26.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:26.627 21:05:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:26.627 21:05:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63153 00:06:26.627 21:05:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:26.627 21:05:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.627 21:05:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63153 00:06:26.627 21:05:37 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63153 ']' 00:06:26.627 21:05:37 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.627 21:05:37 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.627 21:05:37 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.627 21:05:37 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.627 21:05:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:26.627 [2024-07-14 21:05:38.076623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
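The scheduler app is launched in the background with --wait-for-rpc, so nothing initializes until the test drives it over the RPC socket. A sketch of that launch; the flag meanings are inferred from the EAL parameter line below, so treat them as assumptions:

    REPO=/home/vagrant/spdk_repo/spdk
    "$REPO/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    waitforlisten "$scheduler_pid"   # autotest helper: block until the RPC socket answers
    # -m 0xF runs reactors on cores 0-3; -p 0x2 shows up below as --main-lcore=2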
00:06:26.627 [2024-07-14 21:05:38.076856] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63153 ] 00:06:26.885 [2024-07-14 21:05:38.248911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.143 [2024-07-14 21:05:38.473854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.143 [2024-07-14 21:05:38.473987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.143 [2024-07-14 21:05:38.474536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.143 [2024-07-14 21:05:38.474554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.709 21:05:38 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.709 21:05:38 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:27.709 21:05:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.709 21:05:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.709 21:05:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.709 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.709 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.709 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.709 POWER: Cannot set governor of lcore 0 to performance 00:06:27.709 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.709 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.709 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.709 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.709 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:27.709 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:27.709 POWER: Unable to set Power Management Environment for lcore 0 00:06:27.709 [2024-07-14 21:05:38.980587] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:27.709 [2024-07-14 21:05:38.980608] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:27.709 [2024-07-14 21:05:38.980624] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.709 [2024-07-14 21:05:38.980679] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:27.709 [2024-07-14 21:05:38.980695] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:27.709 [2024-07-14 21:05:38.980706] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:27.709 21:05:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.709 21:05:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.709 21:05:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.709 21:05:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.709 [2024-07-14 21:05:39.210948] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
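Once the app is listening, the test selects the dynamic scheduler and only then completes initialization. The two RPCs involved, as issued through the suite's rpc_cmd wrapper:

    rpc_cmd framework_set_scheduler dynamic   # must be set before init completes
    rpc_cmd framework_start_init              # releases --wait-for-rpc and boots the reactors

The POWER and governor errors above are the app probing cpufreq sysfs files that do not exist in this VM; per the NOTICE lines it gives up on the dpdk governor and continues with the dynamic scheduler's default load, core, and busy limits.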
00:06:27.709 21:05:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.709 21:05:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.709 21:05:39 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.709 21:05:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.709 21:05:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.709 ************************************ 00:06:27.709 START TEST scheduler_create_thread 00:06:27.709 ************************************ 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.709 2 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.709 3 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.709 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 4 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 5 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 6 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 7 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 8 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 9 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 10 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.968 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.536 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.536 21:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:28.536 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.536 21:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.913 21:05:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.913 21:05:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:29.913 21:05:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:29.914 21:05:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.914 21:05:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.851 21:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.851 00:06:30.851 real 0m3.099s 00:06:30.851 user 0m0.014s 00:06:30.851 sys 0m0.010s 00:06:30.851 21:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.851 21:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.851 ************************************ 00:06:30.851 END TEST scheduler_create_thread 00:06:30.851 ************************************ 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:30.851 21:05:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:30.851 21:05:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63153 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63153 ']' 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63153 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63153 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:30.851 killing process with pid 63153 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63153' 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63153 00:06:30.851 21:05:42 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63153 00:06:31.419 [2024-07-14 21:05:42.704245] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
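The teardown that follows is the killprocess helper from common/autotest_common.sh, reconstructed here from the xtraced steps (the sudo branch visible in the trace is elided, and the exact body is assumed):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1            # is the process still alive?
        local name=""
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
        # the real helper special-cases name = sudo; skipped in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }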
00:06:32.353 00:06:32.353 real 0m5.841s 00:06:32.353 user 0m11.334s 00:06:32.353 sys 0m0.398s 00:06:32.353 21:05:43 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.353 21:05:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 ************************************ 00:06:32.353 END TEST event_scheduler 00:06:32.353 ************************************ 00:06:32.353 21:05:43 event -- common/autotest_common.sh@1142 -- # return 0 00:06:32.353 21:05:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:32.353 21:05:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:32.353 21:05:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.353 21:05:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.353 21:05:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 ************************************ 00:06:32.353 START TEST app_repeat 00:06:32.353 ************************************ 00:06:32.353 21:05:43 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63270 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:32.353 Process app_repeat pid: 63270 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63270' 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.353 spdk_app_start Round 0 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:32.353 21:05:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63270 /var/tmp/spdk-nbd.sock 00:06:32.353 21:05:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63270 ']' 00:06:32.353 21:05:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.353 21:05:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.353 21:05:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.353 21:05:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.353 21:05:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 [2024-07-14 21:05:43.848633] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
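app_repeat gets its own RPC socket so the nbd RPCs below do not collide with the default one. The launch-and-loop pattern, condensed from the trace (the -t value mirrors repeat_times=4 set above; its exact semantics are assumed):

    REPO=/home/vagrant/spdk_repo/spdk
    "$REPO/test/event/app_repeat/app_repeat" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # each round re-creates Malloc0/Malloc1, exports them over nbd,
        # write-verifies both, then restarts the app via spdk_kill_instance
    done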
00:06:32.353 [2024-07-14 21:05:43.848878] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63270 ] 00:06:32.611 [2024-07-14 21:05:44.013474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.870 [2024-07-14 21:05:44.163789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.870 [2024-07-14 21:05:44.163855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.436 21:05:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.436 21:05:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:33.436 21:05:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.693 Malloc0 00:06:33.694 21:05:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.952 Malloc1 00:06:33.952 21:05:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.952 21:05:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.210 /dev/nbd0 00:06:34.210 21:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.210 21:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:34.210 21:05:45 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.210 1+0 records in 00:06:34.210 1+0 records out 00:06:34.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029306 s, 14.0 MB/s 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.210 21:05:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:34.210 21:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.210 21:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.210 21:05:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.469 /dev/nbd1 00:06:34.469 21:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.469 21:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.469 1+0 records in 00:06:34.469 1+0 records out 00:06:34.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385233 s, 10.6 MB/s 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.469 21:05:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:34.469 21:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.469 21:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.469 21:05:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.469 21:05:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.469 
21:05:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.728 { 00:06:34.728 "nbd_device": "/dev/nbd0", 00:06:34.728 "bdev_name": "Malloc0" 00:06:34.728 }, 00:06:34.728 { 00:06:34.728 "nbd_device": "/dev/nbd1", 00:06:34.728 "bdev_name": "Malloc1" 00:06:34.728 } 00:06:34.728 ]' 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.728 { 00:06:34.728 "nbd_device": "/dev/nbd0", 00:06:34.728 "bdev_name": "Malloc0" 00:06:34.728 }, 00:06:34.728 { 00:06:34.728 "nbd_device": "/dev/nbd1", 00:06:34.728 "bdev_name": "Malloc1" 00:06:34.728 } 00:06:34.728 ]' 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.728 /dev/nbd1' 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.728 /dev/nbd1' 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.728 21:05:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.986 256+0 records in 00:06:34.986 256+0 records out 00:06:34.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00674935 s, 155 MB/s 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.986 256+0 records in 00:06:34.986 256+0 records out 00:06:34.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234684 s, 44.7 MB/s 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.986 256+0 records in 00:06:34.986 256+0 records out 00:06:34.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032441 s, 32.3 MB/s 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.986 21:05:46 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.986 21:05:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.245 21:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.245 21:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.245 21:05:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.245 21:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.245 21:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.246 21:05:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.246 21:05:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.246 21:05:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.246 21:05:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.246 21:05:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.506 21:05:46 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.506 21:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.764 21:05:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.764 21:05:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.022 21:05:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.440 [2024-07-14 21:05:48.549520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.440 [2024-07-14 21:05:48.704831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.440 [2024-07-14 21:05:48.704832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.440 [2024-07-14 21:05:48.848057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.440 [2024-07-14 21:05:48.848172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.343 spdk_app_start Round 1 00:06:39.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.343 21:05:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.343 21:05:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:39.343 21:05:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63270 /var/tmp/spdk-nbd.sock 00:06:39.343 21:05:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63270 ']' 00:06:39.343 21:05:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.343 21:05:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.343 21:05:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
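Round 0 above, stripped of the xtrace noise, boils down to the following data path per device (rpc.py invoked by its full scripts/rpc.py path in the real run; the temp-file location here is a stand-in for the script's nbdrandtest path):

    S=/var/tmp/spdk-nbd.sock
    rpc.py -s $S bdev_malloc_create 64 4096          # 64 MB malloc bdev, 4 KiB blocks -> Malloc0
    rpc.py -s $S nbd_start_disk Malloc0 /dev/nbd0    # expose it as a kernel block device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0          # read back and compare the 1 MiB written
    rpc.py -s $S nbd_stop_disk /dev/nbd0
    rpc.py -s $S nbd_get_disks                       # prints '[]' once both exports are gone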
00:06:39.343 21:05:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.343 21:05:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.343 21:05:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.343 21:05:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:39.343 21:05:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.601 Malloc0 00:06:39.601 21:05:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.860 Malloc1 00:06:39.860 21:05:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.860 21:05:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.118 /dev/nbd0 00:06:40.118 21:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.118 21:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.118 1+0 records in 00:06:40.118 1+0 records out 
00:06:40.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041037 s, 10.0 MB/s 00:06:40.118 21:05:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.119 21:05:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:40.119 21:05:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.119 21:05:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.119 21:05:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:40.119 21:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.119 21:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.119 21:05:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.377 /dev/nbd1 00:06:40.378 21:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.378 21:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.378 1+0 records in 00:06:40.378 1+0 records out 00:06:40.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343507 s, 11.9 MB/s 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.378 21:05:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:40.378 21:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.378 21:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.378 21:05:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.378 21:05:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.378 21:05:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.636 21:05:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.636 { 00:06:40.636 "nbd_device": "/dev/nbd0", 00:06:40.636 "bdev_name": "Malloc0" 00:06:40.636 }, 00:06:40.636 { 00:06:40.636 "nbd_device": "/dev/nbd1", 00:06:40.636 "bdev_name": "Malloc1" 00:06:40.636 } 
00:06:40.636 ]' 00:06:40.636 21:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.636 { 00:06:40.636 "nbd_device": "/dev/nbd0", 00:06:40.636 "bdev_name": "Malloc0" 00:06:40.636 }, 00:06:40.636 { 00:06:40.636 "nbd_device": "/dev/nbd1", 00:06:40.636 "bdev_name": "Malloc1" 00:06:40.636 } 00:06:40.636 ]' 00:06:40.636 21:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.896 /dev/nbd1' 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.896 /dev/nbd1' 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.896 256+0 records in 00:06:40.896 256+0 records out 00:06:40.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101279 s, 104 MB/s 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.896 256+0 records in 00:06:40.896 256+0 records out 00:06:40.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289908 s, 36.2 MB/s 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.896 256+0 records in 00:06:40.896 256+0 records out 00:06:40.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289883 s, 36.2 MB/s 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.896 21:05:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.155 21:05:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.414 21:05:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.673 21:05:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.673 21:05:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.931 21:05:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.304 [2024-07-14 21:05:54.432367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.304 [2024-07-14 21:05:54.572538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.304 [2024-07-14 21:05:54.572539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.304 [2024-07-14 21:05:54.713914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.304 [2024-07-14 21:05:54.713999] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.203 spdk_app_start Round 2 00:06:45.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.203 21:05:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.203 21:05:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:45.203 21:05:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63270 /var/tmp/spdk-nbd.sock 00:06:45.203 21:05:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63270 ']' 00:06:45.203 21:05:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.203 21:05:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.203 21:05:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
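Each nbd_start_disk in the rounds above is followed by a waitfornbd check: poll /proc/partitions until the device registers, then read one 4 KiB block with O_DIRECT to confirm the device actually serves I/O. A condensed sketch of that check, with the grep and dd arguments taken from the trace (the temp-file path here is illustrative, the run uses a file under the repo's test directory):

    waitfornbd() {
      local nbd_name=$1 tmp=/tmp/nbdtest i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break  # device node is live
        sleep 0.1
      done
      # a single direct read proves the nbd connection moves real data
      dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      local size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]
    }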
00:06:45.203 21:05:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.203 21:05:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.203 21:05:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.203 21:05:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:45.203 21:05:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.462 Malloc0 00:06:45.462 21:05:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.028 Malloc1 00:06:46.028 21:05:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.028 /dev/nbd0 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.028 21:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.028 21:05:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:46.028 21:05:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:46.028 21:05:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:46.028 21:05:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:46.028 21:05:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.286 1+0 records in 00:06:46.286 1+0 records out 
00:06:46.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318915 s, 12.8 MB/s 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.286 /dev/nbd1 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.286 1+0 records in 00:06:46.286 1+0 records out 00:06:46.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298641 s, 13.7 MB/s 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:46.286 21:05:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.286 21:05:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.544 21:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.544 { 00:06:46.544 "nbd_device": "/dev/nbd0", 00:06:46.544 "bdev_name": "Malloc0" 00:06:46.544 }, 00:06:46.544 { 00:06:46.544 "nbd_device": "/dev/nbd1", 00:06:46.544 "bdev_name": "Malloc1" 00:06:46.544 } 
00:06:46.544 ]' 00:06:46.544 21:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.544 { 00:06:46.544 "nbd_device": "/dev/nbd0", 00:06:46.544 "bdev_name": "Malloc0" 00:06:46.544 }, 00:06:46.544 { 00:06:46.544 "nbd_device": "/dev/nbd1", 00:06:46.544 "bdev_name": "Malloc1" 00:06:46.544 } 00:06:46.544 ]' 00:06:46.544 21:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.802 /dev/nbd1' 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.802 /dev/nbd1' 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.802 256+0 records in 00:06:46.802 256+0 records out 00:06:46.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0094428 s, 111 MB/s 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.802 256+0 records in 00:06:46.802 256+0 records out 00:06:46.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248601 s, 42.2 MB/s 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.802 256+0 records in 00:06:46.802 256+0 records out 00:06:46.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294585 s, 35.6 MB/s 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.802 21:05:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.803 21:05:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.061 21:05:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.062 21:05:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.319 21:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.576 21:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.576 21:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.577 21:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.577 21:05:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.577 21:05:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.142 21:05:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.077 [2024-07-14 21:06:00.382825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.077 [2024-07-14 21:06:00.557494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.077 [2024-07-14 21:06:00.557499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.336 [2024-07-14 21:06:00.706216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:49.336 [2024-07-14 21:06:00.706335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.294 21:06:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63270 /var/tmp/spdk-nbd.sock 00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63270 ']' 00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
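The data check each round runs against both devices reduces to: fill a 1 MiB temp file from /dev/urandom, dd it onto every nbd device with O_DIRECT, then cmp the same range back from each one. A condensed sketch of that cycle using the dd and cmp arguments from the trace (the temp path is illustrative):

    nbd_dd_data_verify() {
      local operation=$1 tmp=/tmp/nbdrandtest i
      local nbd_list=(/dev/nbd0 /dev/nbd1)
      if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp" bs=4096 count=256  # 1 MiB of random data
        for i in "${nbd_list[@]}"; do
          dd if="$tmp" of="$i" bs=4096 count=256 oflag=direct
        done
      else
        for i in "${nbd_list[@]}"; do
          cmp -b -n 1M "$tmp" "$i" || return 1  # bytes must match exactly
        done
        rm "$tmp"
      fi
    }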
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:06:51.294 21:06:02 event.app_repeat -- event/event.sh@39 -- # killprocess 63270
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63270 ']'
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63270
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63270
00:06:51.294 21:06:02 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:51.294 killing process with pid 63270
21:06:02 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
21:06:02 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63270'
21:06:02 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63270
21:06:02 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63270
00:06:52.309 spdk_app_start is called in Round 0.
00:06:52.309 Shutdown signal received, stop current app iteration
00:06:52.309 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization...
00:06:52.309 spdk_app_start is called in Round 1.
00:06:52.309 Shutdown signal received, stop current app iteration
00:06:52.309 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization...
00:06:52.309 spdk_app_start is called in Round 2.
00:06:52.309 Shutdown signal received, stop current app iteration
00:06:52.309 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization...
00:06:52.309 spdk_app_start is called in Round 3.
00:06:52.309 Shutdown signal received, stop current app iteration
00:06:52.309 21:06:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:52.309 21:06:03 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:52.309
00:06:52.309 real 0m19.908s
00:06:52.309 user 0m42.980s
00:06:52.309 sys 0m2.537s
00:06:52.309 21:06:03 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:52.309 ************************************
00:06:52.309 END TEST app_repeat
00:06:52.309 ************************************
00:06:52.309 21:06:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:52.309 21:06:03 event -- common/autotest_common.sh@1142 -- # return 0
00:06:52.309 21:06:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:52.309 21:06:03 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:52.309 21:06:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:52.309 21:06:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:52.309 21:06:03 event -- common/autotest_common.sh@10 -- # set +x
00:06:52.309 ************************************
00:06:52.309 START TEST cpu_locks
00:06:52.309 ************************************
00:06:52.309 21:06:03 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:52.309 * Looking for test storage...
00:06:52.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:52.309 21:06:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:52.309 21:06:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:52.309 21:06:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:52.309 21:06:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:52.309 21:06:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:52.309 21:06:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:52.309 21:06:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:52.309 ************************************
00:06:52.309 START TEST default_locks
00:06:52.309 ************************************
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63717
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63717
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63717 ']'
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:52.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
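The killprocess sequence traced above guards the kill with sanity checks before signalling. Roughly, under the checks shown in the trace (error handling simplified relative to the real helper):

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                  # pid must still be alive
      if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1    # refuse to signal a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
    }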
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:52.309 21:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:52.567 [2024-07-14 21:06:03.940481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-14 21:06:03.940639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63717 ]
[2024-07-14 21:06:04.098050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.826 [2024-07-14 21:06:04.250741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.392 21:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:53.392 21:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:06:53.392 21:06:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63717
00:06:53.392 21:06:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63717
00:06:53.392 21:06:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:53.961 21:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63717
00:06:53.961 21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 63717 ']'
00:06:53.961 21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 63717
00:06:53.961 21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:06:53.961 21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:53.961 21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63717
00:06:53.961 21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:53.961 21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
killing process with pid 63717
21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63717'
21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 63717
21:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 63717
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63717
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63717
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 63717
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63717 ']'
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:55.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:55.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63717) - No such process
00:06:55.866 ERROR: process (pid: 63717) is no longer running
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:55.866
00:06:55.866 real 0m3.397s
00:06:55.866 user 0m3.545s
00:06:55.866 sys 0m0.526s
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:55.866 21:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:55.866 ************************************
00:06:55.866 END TEST default_locks
00:06:55.866 ************************************
00:06:55.866 21:06:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:55.866 21:06:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:55.866 21:06:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:55.866 21:06:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:55.866 21:06:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:55.866 ************************************
00:06:55.866 START TEST default_locks_via_rpc
00:06:55.866 ************************************
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63781
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63781
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63781 ']'
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:55.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:55.866 21:06:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.867 [2024-07-14 21:06:07.405966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-14 21:06:07.406153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63781 ]
00:06:56.126 [2024-07-14 21:06:07.575694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.385 [2024-07-14 21:06:07.751604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63781
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63781
00:06:56.954 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63781
00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 63781 ']'
00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 63781 00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63781 00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.524 killing process with pid 63781 00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63781' 00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 63781 00:06:57.524 21:06:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 63781 00:06:59.431 00:06:59.431 real 0m3.464s 00:06:59.431 user 0m3.539s 00:06:59.431 sys 0m0.559s 00:06:59.431 21:06:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.431 21:06:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.431 ************************************ 00:06:59.431 END TEST default_locks_via_rpc 00:06:59.431 ************************************ 00:06:59.431 21:06:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:59.431 21:06:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.431 21:06:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.431 21:06:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.431 21:06:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.431 ************************************ 00:06:59.431 START TEST non_locking_app_on_locked_coremask 00:06:59.431 ************************************ 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63857 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63857 /var/tmp/spdk.sock 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63857 ']' 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
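The lock assertion at the heart of these cpu_locks tests is a single pipeline, visible in the locks_exist traces above: the target pid must hold a file lock whose path contains spdk_cpu_lock, since the app takes one lock file per core it claims. Sketched directly from the trace:

    locks_exist() {
      # lslocks lists the file locks held by the pid; the core mask lock shows up here
      lslocks -p "$1" | grep -q spdk_cpu_lock
    }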
00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.431 21:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.431 [2024-07-14 21:06:10.923298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:59.431 [2024-07-14 21:06:10.923478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63857 ] 00:06:59.691 [2024-07-14 21:06:11.095504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.950 [2024-07-14 21:06:11.256782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63873 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63873 /var/tmp/spdk2.sock 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63873 ']' 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.519 21:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.519 [2024-07-14 21:06:11.959243] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:00.519 [2024-07-14 21:06:11.959640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63873 ] 00:07:00.778 [2024-07-14 21:06:12.129118] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
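What this test exercises is visible in the two launches traced above: the first target claims core 0 and its core lock, while a second instance on the same mask can still start because --disable-cpumask-locks skips taking the lock (hence the "CPU core locks deactivated" notice). The two commands as they appear in the trace, run in the background on separate RPC sockets:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &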
00:07:00.778 [2024-07-14 21:06:12.129191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.037 [2024-07-14 21:06:12.450204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.412 21:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:02.412 21:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:02.412 21:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63857
00:07:02.412 21:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63857
00:07:02.412 21:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63857
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63857 ']'
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63857
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63857
killing process with pid 63857
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63857'
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63857
00:07:02.979 21:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63857
00:07:07.166 21:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63873
00:07:07.166 21:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63873 ']'
00:07:07.166 21:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63873
00:07:07.166 21:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:07.166 21:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:07.166 21:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63873
killing process with pid 63873
00:07:07.166 21:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:07.166 21:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:07.166 21:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63873'
00:07:07.166 21:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63873
00:07:07.166 21:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63873
00:07:08.542 ************************************
00:07:08.542 END TEST non_locking_app_on_locked_coremask
00:07:08.542 ************************************
00:07:08.542
00:07:08.542 real 0m8.943s
00:07:08.542 user 0m9.464s
00:07:08.542 sys 0m1.083s
00:07:08.542 21:06:19 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:08.542 21:06:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:08.542 21:06:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:08.542 21:06:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:08.542 21:06:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:08.542 21:06:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:08.542 21:06:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:08.542 ************************************
00:07:08.542 START TEST locking_app_on_unlocked_coremask
00:07:08.542 ************************************
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:07:08.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63987
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63987 /var/tmp/spdk.sock
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63987 ']'
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:08.543 21:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:08.543 [2024-07-14 21:06:19.923148] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-14 21:06:19.923361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63987 ]
00:07:08.801 [2024-07-14 21:06:20.095990] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:08.801 [2024-07-14 21:06:20.096048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.801 [2024-07-14 21:06:20.264852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64003 00:07:09.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64003 /var/tmp/spdk2.sock 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64003 ']' 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.368 21:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.627 [2024-07-14 21:06:20.946966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:09.627 [2024-07-14 21:06:20.947388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64003 ] 00:07:09.627 [2024-07-14 21:06:21.114650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.885 [2024-07-14 21:06:21.424875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.259 21:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.259 21:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:11.259 21:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64003 00:07:11.259 21:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64003 00:07:11.259 21:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63987 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63987 ']' 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63987 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63987 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.196 killing process with pid 63987 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63987' 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63987 00:07:12.196 21:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63987 00:07:15.479 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64003 00:07:15.479 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64003 ']' 00:07:15.480 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64003 00:07:15.480 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.480 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.480 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64003 00:07:15.480 killing process with pid 64003 00:07:15.480 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.480 21:06:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.480 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64003' 00:07:15.480 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64003 00:07:15.480 21:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64003 00:07:17.385 ************************************ 00:07:17.385 END TEST locking_app_on_unlocked_coremask 00:07:17.385 ************************************ 00:07:17.385 00:07:17.385 real 0m8.867s 00:07:17.385 user 0m9.337s 00:07:17.385 sys 0m1.117s 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.385 21:06:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.385 21:06:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:17.385 21:06:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.385 21:06:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.385 21:06:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.385 ************************************ 00:07:17.385 START TEST locking_app_on_locked_coremask 00:07:17.385 ************************************ 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64127 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64127 /var/tmp/spdk.sock 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64127 ']' 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.385 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.386 21:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.386 [2024-07-14 21:06:28.840755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
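A condensed sketch of the killprocess flow these traces keep repeating (liveness probe, comm lookup, kill, wait); the sudo branch is an assumption reconstructed from the '[' reactor_0 = sudo ']' comparison above:

    # Hedged reconstruction of the traced killprocess helper.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                    # process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")       # reactor_0 for a spdk_tgt
        echo "killing process with pid $pid"
        if [ "$name" = sudo ]; then
            sudo kill "$pid"                          # assumption: escalate for sudo-wrapped targets
        else
            kill "$pid"
        fi
        wait "$pid" || true                           # wait reaps only this shell's children
    }
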
00:07:17.386 [2024-07-14 21:06:28.840956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64127 ] 00:07:17.645 [2024-07-14 21:06:29.011963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.645 [2024-07-14 21:06:29.166017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64143 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64143 /var/tmp/spdk2.sock 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64143 /var/tmp/spdk2.sock 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64143 /var/tmp/spdk2.sock 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64143 ']' 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.213 21:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.472 [2024-07-14 21:06:29.869782] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:18.472 [2024-07-14 21:06:29.870006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64143 ] 00:07:18.731 [2024-07-14 21:06:30.047617] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64127 has claimed it. 00:07:18.731 [2024-07-14 21:06:30.047721] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:18.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64143) - No such process 00:07:18.991 ERROR: process (pid: 64143) is no longer running 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64127 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64127 00:07:18.991 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64127 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64127 ']' 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64127 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64127 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.563 killing process with pid 64127 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64127' 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64127 00:07:19.563 21:06:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64127 00:07:21.517 00:07:21.517 real 0m3.899s 00:07:21.517 user 0m4.279s 00:07:21.517 sys 0m0.648s 00:07:21.517 21:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.517 21:06:32 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:21.517 ************************************ 00:07:21.517 END TEST locking_app_on_locked_coremask 00:07:21.517 ************************************ 00:07:21.517 21:06:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:21.517 21:06:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:21.517 21:06:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.517 21:06:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.517 21:06:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.517 ************************************ 00:07:21.517 START TEST locking_overlapped_coremask 00:07:21.517 ************************************ 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64202 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64202 /var/tmp/spdk.sock 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64202 ']' 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.517 21:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.517 [2024-07-14 21:06:32.794535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:21.517 [2024-07-14 21:06:32.794718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64202 ] 00:07:21.517 [2024-07-14 21:06:32.963980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.776 [2024-07-14 21:06:33.123691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.776 [2024-07-14 21:06:33.123787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.776 [2024-07-14 21:06:33.123841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64220 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64220 /var/tmp/spdk2.sock 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64220 /var/tmp/spdk2.sock 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64220 /var/tmp/spdk2.sock 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64220 ']' 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.345 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.346 21:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.346 [2024-07-14 21:06:33.890276] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:22.346 [2024-07-14 21:06:33.890439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64220 ] 00:07:22.605 [2024-07-14 21:06:34.067997] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64202 has claimed it. 00:07:22.605 [2024-07-14 21:06:34.068101] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:23.173 ERROR: process (pid: 64220) is no longer running 00:07:23.173 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64220) - No such process 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64202 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64202 ']' 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64202 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64202 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.173 killing process with pid 64202 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64202' 00:07:23.173 21:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64202 00:07:23.173 21:06:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64202 00:07:25.080 00:07:25.080 real 0m3.757s 00:07:25.080 user 0m9.883s 00:07:25.080 sys 0m0.561s 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.080 ************************************ 00:07:25.080 END TEST locking_overlapped_coremask 00:07:25.080 ************************************ 00:07:25.080 21:06:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:25.080 21:06:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:25.080 21:06:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.080 21:06:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.080 21:06:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.080 ************************************ 00:07:25.080 START TEST locking_overlapped_coremask_via_rpc 00:07:25.080 ************************************ 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:25.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64284 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64284 /var/tmp/spdk.sock 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64284 ']' 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.080 21:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.080 [2024-07-14 21:06:36.574662] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:25.080 [2024-07-14 21:06:36.574865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64284 ] 00:07:25.338 [2024-07-14 21:06:36.727071] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:25.338 [2024-07-14 21:06:36.727136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.338 [2024-07-14 21:06:36.874177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.338 [2024-07-14 21:06:36.874277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.338 [2024-07-14 21:06:36.874292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64302 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64302 /var/tmp/spdk2.sock 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64302 ']' 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.274 21:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.274 [2024-07-14 21:06:37.635473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:26.274 [2024-07-14 21:06:37.636283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64302 ] 00:07:26.274 [2024-07-14 21:06:37.815095] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:26.274 [2024-07-14 21:06:37.815204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.842 [2024-07-14 21:06:38.173601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.842 [2024-07-14 21:06:38.176968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.842 [2024-07-14 21:06:38.176986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.219 [2024-07-14 21:06:39.482092] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64284 has claimed it. 
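The claim failure above is the point of this test: the two targets' core masks intersect. A one-liner to locate the contested core, assuming plain bash arithmetic:

    # Hedged sketch: finding the overlap between the two masks in this run.
    # -m 0x7  -> cores 0,1,2 (pid 64284)    -m 0x1c -> cores 2,3,4 (pid 64302)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2
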
00:07:28.219 request: 00:07:28.219 { 00:07:28.219 "method": "framework_enable_cpumask_locks", 00:07:28.219 "req_id": 1 00:07:28.219 } 00:07:28.219 Got JSON-RPC error response 00:07:28.219 response: 00:07:28.219 { 00:07:28.219 "code": -32603, 00:07:28.219 "message": "Failed to claim CPU core: 2" 00:07:28.219 } 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64284 /var/tmp/spdk.sock 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64284 ']' 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64302 /var/tmp/spdk2.sock 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64302 ']' 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.219 21:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.786 ************************************ 00:07:28.786 END TEST locking_overlapped_coremask_via_rpc 00:07:28.786 ************************************ 00:07:28.786 21:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.786 21:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:28.786 21:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:28.786 21:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:28.786 21:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:28.786 21:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:28.786 00:07:28.786 real 0m3.556s 00:07:28.786 user 0m1.314s 00:07:28.786 sys 0m0.195s 00:07:28.786 21:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.786 21:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:28.786 21:06:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:28.786 21:06:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64284 ]] 00:07:28.786 21:06:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64284 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64284 ']' 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64284 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64284 00:07:28.786 killing process with pid 64284 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64284' 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64284 00:07:28.786 21:06:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64284 00:07:30.690 21:06:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64302 ]] 00:07:30.690 21:06:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64302 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64302 ']' 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64302 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:30.690 21:06:42 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64302 00:07:30.690 killing process with pid 64302 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64302' 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64302 00:07:30.690 21:06:42 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64302 00:07:32.592 21:06:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:32.592 Process with pid 64284 is not found 00:07:32.592 Process with pid 64302 is not found 00:07:32.592 21:06:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:32.592 21:06:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64284 ]] 00:07:32.592 21:06:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64284 00:07:32.592 21:06:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64284 ']' 00:07:32.592 21:06:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64284 00:07:32.592 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64284) - No such process 00:07:32.592 21:06:43 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64284 is not found' 00:07:32.592 21:06:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64302 ]] 00:07:32.592 21:06:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64302 00:07:32.592 21:06:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64302 ']' 00:07:32.592 21:06:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64302 00:07:32.592 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64302) - No such process 00:07:32.592 21:06:43 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64302 is not found' 00:07:32.592 21:06:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:32.592 00:07:32.592 real 0m40.223s 00:07:32.592 user 1m9.224s 00:07:32.592 sys 0m5.555s 00:07:32.592 21:06:43 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.592 ************************************ 00:07:32.592 END TEST cpu_locks 00:07:32.592 ************************************ 00:07:32.592 21:06:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.592 21:06:44 event -- common/autotest_common.sh@1142 -- # return 0 00:07:32.592 00:07:32.592 real 1m11.510s 00:07:32.592 user 2m11.155s 00:07:32.592 sys 0m9.034s 00:07:32.592 ************************************ 00:07:32.592 END TEST event 00:07:32.592 ************************************ 00:07:32.592 21:06:44 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.592 21:06:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.592 21:06:44 -- common/autotest_common.sh@1142 -- # return 0 00:07:32.592 21:06:44 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:32.592 21:06:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.592 21:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.592 21:06:44 -- common/autotest_common.sh@10 -- # set +x 00:07:32.592 ************************************ 00:07:32.592 START TEST thread 
00:07:32.592 ************************************ 00:07:32.592 21:06:44 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:32.592 * Looking for test storage... 00:07:32.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:32.592 21:06:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:32.592 21:06:44 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:32.592 21:06:44 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.592 21:06:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.850 ************************************ 00:07:32.850 START TEST thread_poller_perf 00:07:32.850 ************************************ 00:07:32.850 21:06:44 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:32.850 [2024-07-14 21:06:44.181405] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:32.850 [2024-07-14 21:06:44.181537] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64466 ] 00:07:32.850 [2024-07-14 21:06:44.342299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.108 [2024-07-14 21:06:44.570006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.108 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:34.559 ====================================== 00:07:34.559 busy:2210299602 (cyc) 00:07:34.559 total_run_count: 353000 00:07:34.559 tsc_hz: 2200000000 (cyc) 00:07:34.559 ====================================== 00:07:34.559 poller_cost: 6261 (cyc), 2845 (nsec) 00:07:34.559 00:07:34.559 real 0m1.768s 00:07:34.559 user 0m1.564s 00:07:34.559 sys 0m0.095s 00:07:34.559 ************************************ 00:07:34.559 END TEST thread_poller_perf 00:07:34.559 ************************************ 00:07:34.559 21:06:45 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.559 21:06:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.560 21:06:45 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:34.560 21:06:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:34.560 21:06:45 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:34.560 21:06:45 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.560 21:06:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.560 ************************************ 00:07:34.560 START TEST thread_poller_perf 00:07:34.560 ************************************ 00:07:34.560 21:06:45 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:34.560 [2024-07-14 21:06:46.011028] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
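The poller_cost line in the first summary above follows directly from the reported counters; a worked check, assuming the tool's integer division:

    # Hedged arithmetic check of the 1 usec-period poller_perf run above.
    busy_cyc=2210299602 runs=353000 tsc_hz=2200000000
    echo "cyc/poll:  $(( busy_cyc / runs ))"                        # 6261
    echo "nsec/poll: $(( busy_cyc / runs * 1000000000 / tsc_hz ))"  # 2845
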
00:07:34.560 [2024-07-14 21:06:46.011179] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64508 ] 00:07:34.818 [2024-07-14 21:06:46.179610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.818 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:34.818 [2024-07-14 21:06:46.330920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.195 ====================================== 00:07:36.195 busy:2203366464 (cyc) 00:07:36.195 total_run_count: 4392000 00:07:36.195 tsc_hz: 2200000000 (cyc) 00:07:36.195 ====================================== 00:07:36.195 poller_cost: 501 (cyc), 227 (nsec) 00:07:36.195 00:07:36.195 real 0m1.714s 00:07:36.195 user 0m1.520s 00:07:36.195 sys 0m0.087s 00:07:36.195 21:06:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.195 21:06:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:36.195 ************************************ 00:07:36.195 END TEST thread_poller_perf 00:07:36.195 ************************************ 00:07:36.195 21:06:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:36.195 21:06:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:36.195 ************************************ 00:07:36.195 END TEST thread 00:07:36.195 ************************************ 00:07:36.195 00:07:36.195 real 0m3.669s 00:07:36.195 user 0m3.160s 00:07:36.195 sys 0m0.283s 00:07:36.195 21:06:47 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.195 21:06:47 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.455 21:06:47 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.455 21:06:47 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:36.455 21:06:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.455 21:06:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.455 21:06:47 -- common/autotest_common.sh@10 -- # set +x 00:07:36.455 ************************************ 00:07:36.455 START TEST accel 00:07:36.455 ************************************ 00:07:36.455 21:06:47 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:36.455 * Looking for test storage... 00:07:36.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:36.455 21:06:47 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:36.455 21:06:47 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:36.455 21:06:47 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.455 21:06:47 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=64589 00:07:36.455 21:06:47 accel -- accel/accel.sh@63 -- # waitforlisten 64589 00:07:36.455 21:06:47 accel -- common/autotest_common.sh@829 -- # '[' -z 64589 ']' 00:07:36.455 21:06:47 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.455 21:06:47 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.455 21:06:47 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:36.455 21:06:47 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.455 21:06:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.455 21:06:47 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:36.455 21:06:47 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:36.455 21:06:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.455 21:06:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.455 21:06:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.455 21:06:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.455 21:06:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.455 21:06:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:36.455 21:06:47 accel -- accel/accel.sh@41 -- # jq -r . 00:07:36.455 [2024-07-14 21:06:47.988084] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:36.455 [2024-07-14 21:06:47.988259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64589 ] 00:07:36.715 [2024-07-14 21:06:48.164017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.974 [2024-07-14 21:06:48.360072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.543 21:06:48 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.543 21:06:48 accel -- common/autotest_common.sh@862 -- # return 0 00:07:37.543 21:06:48 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:37.543 21:06:48 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:37.543 21:06:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:37.543 21:06:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:37.543 21:06:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:37.543 21:06:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:37.543 21:06:48 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:37.543 21:06:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.543 21:06:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.543 21:06:49 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.543 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.543 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.543 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.543 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.543 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.543 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.543 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.543 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.543 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.543 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.543 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.543 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.543 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.543 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.543 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.543 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.544 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.544 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.544 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.544 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.544 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.544 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.544 
21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.544 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.544 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.544 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.544 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.544 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.544 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.544 21:06:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:37.544 21:06:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:37.544 21:06:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:37.544 21:06:49 accel -- accel/accel.sh@75 -- # killprocess 64589 00:07:37.544 21:06:49 accel -- common/autotest_common.sh@948 -- # '[' -z 64589 ']' 00:07:37.544 21:06:49 accel -- common/autotest_common.sh@952 -- # kill -0 64589 00:07:37.544 21:06:49 accel -- common/autotest_common.sh@953 -- # uname 00:07:37.544 21:06:49 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.544 21:06:49 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64589 00:07:37.803 killing process with pid 64589 00:07:37.803 21:06:49 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.803 21:06:49 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.803 21:06:49 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64589' 00:07:37.803 21:06:49 accel -- common/autotest_common.sh@967 -- # kill 64589 00:07:37.803 21:06:49 accel -- common/autotest_common.sh@972 -- # wait 64589 00:07:39.708 21:06:50 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:39.708 21:06:50 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:39.708 21:06:50 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.708 21:06:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.708 21:06:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.708 21:06:50 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:39.708 21:06:50 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
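The jq pipeline traced above flattens the accel_get_opc_assignments map into the key=value pairs that feed expected_opcs; a standalone run on hypothetical RPC output:

    # Hedged sketch of the opcode-map parsing, on hypothetical RPC output.
    echo '{"copy": "software", "fill": "software", "crc32c": "software"}' \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software
    # crc32c=software
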
00:07:39.708 21:06:50 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.708 21:06:50 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:39.708 21:06:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.708 21:06:50 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:39.708 21:06:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:39.708 21:06:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.708 21:06:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.708 ************************************ 00:07:39.708 START TEST accel_missing_filename 00:07:39.708 ************************************ 00:07:39.708 21:06:50 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:39.708 21:06:50 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:39.708 21:06:50 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:39.708 21:06:50 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:39.708 21:06:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.708 21:06:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:39.708 21:06:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.708 21:06:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:39.708 21:06:50 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:39.708 [2024-07-14 21:06:51.025245] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:39.708 [2024-07-14 21:06:51.025416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64659 ] 00:07:39.708 [2024-07-14 21:06:51.193292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.967 [2024-07-14 21:06:51.339836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.967 [2024-07-14 21:06:51.492427] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.534 [2024-07-14 21:06:51.867999] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:40.793 A filename is required. 
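The "A filename is required." error above is the expected outcome: accel_missing_filename drives accel_perf through the harness's NOT wrapper, so the test passes only when the command fails. A minimal bash sketch of that pattern, assuming a simplified NOT (the real helper in autotest_common.sh additionally normalizes exit codes, which is what the es= arithmetic below is doing):

    NOT() {
        # Run the given command; invert its exit status so an
        # expected failure counts as a pass for run_test.
        if "$@"; then
            return 1    # command unexpectedly succeeded
        else
            return 0    # command failed, as the test requires
        fi
    }

    # Expected to fail: compress workload with no -l input file.
    NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress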
00:07:40.793 21:06:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:40.793 21:06:52 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.793 21:06:52 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:40.793 21:06:52 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:40.793 21:06:52 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:40.793 21:06:52 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.793 00:07:40.793 real 0m1.238s 00:07:40.793 user 0m1.035s 00:07:40.793 sys 0m0.133s 00:07:40.793 21:06:52 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.793 21:06:52 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:40.793 ************************************ 00:07:40.793 END TEST accel_missing_filename 00:07:40.793 ************************************ 00:07:40.793 21:06:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.793 21:06:52 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.793 21:06:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:40.793 21:06:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.793 21:06:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.793 ************************************ 00:07:40.793 START TEST accel_compress_verify 00:07:40.793 ************************************ 00:07:40.793 21:06:52 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.793 21:06:52 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:40.793 21:06:52 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.793 21:06:52 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:40.793 21:06:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.793 21:06:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:40.793 21:06:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.793 21:06:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.793 21:06:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.793 21:06:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:40.793 21:06:52 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.793 21:06:52 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.793 21:06:52 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.793 21:06:52 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.793 21:06:52 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.794 21:06:52 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:07:40.794 21:06:52 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:40.794 [2024-07-14 21:06:52.317404] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:40.794 [2024-07-14 21:06:52.317563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64690 ] 00:07:41.052 [2024-07-14 21:06:52.487959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.312 [2024-07-14 21:06:52.643917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.312 [2024-07-14 21:06:52.791904] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.899 [2024-07-14 21:06:53.160249] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:42.160 00:07:42.160 Compression does not support the verify option, aborting. 00:07:42.160 21:06:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:42.160 21:06:53 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.160 21:06:53 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:42.160 21:06:53 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:42.160 21:06:53 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:42.160 21:06:53 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.160 00:07:42.160 real 0m1.233s 00:07:42.160 user 0m1.036s 00:07:42.160 sys 0m0.141s 00:07:42.160 21:06:53 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.160 21:06:53 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:42.160 ************************************ 00:07:42.160 END TEST accel_compress_verify 00:07:42.160 ************************************ 00:07:42.160 21:06:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.160 21:06:53 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:42.160 21:06:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:42.160 21:06:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.160 21:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.160 ************************************ 00:07:42.160 START TEST accel_wrong_workload 00:07:42.160 ************************************ 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:42.160 21:06:53 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:42.160 Unsupported workload type: foobar 00:07:42.160 [2024-07-14 21:06:53.602566] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:42.160 accel_perf options: 00:07:42.160 [-h help message] 00:07:42.160 [-q queue depth per core] 00:07:42.160 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:42.160 [-T number of threads per core 00:07:42.160 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:42.160 [-t time in seconds] 00:07:42.160 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:42.160 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:42.160 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:42.160 [-l for compress/decompress workloads, name of uncompressed input file 00:07:42.160 [-S for crc32c workload, use this seed value (default 0) 00:07:42.160 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:42.160 [-f for fill workload, use this BYTE value (default 255) 00:07:42.160 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:42.160 [-y verify result if this switch is on] 00:07:42.160 [-a tasks to allocate per core (default: same value as -q)] 00:07:42.160 Can be used to spread operations across a wider range of memory. 
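The usage text above, printed when the bogus 'foobar' workload is rejected, also documents the flags the rest of the suite exercises. As an illustrative sketch only (binary path copied from the log, flag meanings taken from the usage text), the accel_crc32c test further below reduces to an invocation like:

    # One-second software crc32c run with seed 32, verifying results (-y).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y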
00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.160 00:07:42.160 real 0m0.078s 00:07:42.160 user 0m0.085s 00:07:42.160 sys 0m0.042s 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.160 21:06:53 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:42.160 ************************************ 00:07:42.160 END TEST accel_wrong_workload 00:07:42.160 ************************************ 00:07:42.160 21:06:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.160 21:06:53 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:42.160 21:06:53 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:42.160 21:06:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.160 21:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.160 ************************************ 00:07:42.160 START TEST accel_negative_buffers 00:07:42.160 ************************************ 00:07:42.160 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:42.160 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:42.160 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:42.160 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:42.160 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.160 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:42.160 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.160 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:42.160 21:06:53 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:42.419 -x option must be non-negative. 
00:07:42.419 [2024-07-14 21:06:53.749943] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:07:42.419 [accel_perf usage text elided; identical to the listing printed above when the 'foobar' workload was rejected]
00:07:42.419 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:42.419 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.419 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.419 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.419 00:07:42.419 real 0m0.102s 00:07:42.419 user 0m0.118s 00:07:42.419 sys 0m0.058s 00:07:42.419 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.419 ************************************ 00:07:42.419 21:06:53 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:42.419 END TEST accel_negative_buffers 00:07:42.419 ************************************ 00:07:42.420 21:06:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.420 21:06:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:42.420 21:06:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:42.420 21:06:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.420 21:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.420 ************************************ 00:07:42.420 START TEST accel_crc32c 00:07:42.420 ************************************ 00:07:42.420 21:06:53 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:42.420 21:06:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:42.420 [2024-07-14 21:06:53.884043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:42.420 [2024-07-14 21:06:53.884206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64763 ] 00:07:42.679 [2024-07-14 21:06:54.049895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.679 [2024-07-14 21:06:54.197498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.938 21:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:44.843 21:06:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.843 00:07:44.843 real 0m2.218s 00:07:44.843 user 0m1.984s 00:07:44.843 sys 0m0.144s 00:07:44.843 21:06:56 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.843 21:06:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:44.843 ************************************ 00:07:44.843 END TEST accel_crc32c 00:07:44.843 ************************************ 00:07:44.843 21:06:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.843 21:06:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:44.843 21:06:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:44.843 21:06:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.843 21:06:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.844 ************************************ 00:07:44.844 START TEST accel_crc32c_C2 00:07:44.844 ************************************ 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:44.844 21:06:56 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:44.844 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:44.844 [2024-07-14 21:06:56.150513] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:44.844 [2024-07-14 21:06:56.150687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64809 ] 00:07:44.844 [2024-07-14 21:06:56.318857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.102 [2024-07-14 21:06:56.474063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.102 21:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.003 21:06:58 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.003 00:07:47.003 real 0m2.229s 00:07:47.003 user 0m1.989s 00:07:47.003 sys 0m0.148s 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.003 21:06:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:47.003 ************************************ 00:07:47.003 END TEST accel_crc32c_C2 00:07:47.003 ************************************ 00:07:47.003 21:06:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.003 21:06:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:47.003 21:06:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:47.003 21:06:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.003 21:06:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.003 ************************************ 00:07:47.003 START TEST accel_copy 00:07:47.003 ************************************ 00:07:47.003 21:06:58 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.003 21:06:58 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:47.003 21:06:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:47.003 [2024-07-14 21:06:58.435916] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:47.003 [2024-07-14 21:06:58.436162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64850 ] 00:07:47.261 [2024-07-14 21:06:58.605971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.261 [2024-07-14 21:06:58.756537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.519 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.519 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.519 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.519 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 
21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.520 21:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:49.423 21:07:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.423 00:07:49.423 real 0m2.287s 00:07:49.423 user 0m2.065s 00:07:49.423 sys 0m0.131s 00:07:49.423 21:07:00 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.423 ************************************ 00:07:49.423 END TEST accel_copy 00:07:49.423 21:07:00 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:49.423 ************************************ 00:07:49.423 21:07:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.423 21:07:00 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:49.423 21:07:00 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:49.423 21:07:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.423 21:07:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.423 ************************************ 00:07:49.423 START TEST accel_fill 00:07:49.423 ************************************ 00:07:49.423 21:07:00 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.423 21:07:00 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:49.423 21:07:00 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:49.423 [2024-07-14 21:07:00.774968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:49.423 [2024-07-14 21:07:00.775178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64891 ] 00:07:49.423 [2024-07-14 21:07:00.947793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.682 [2024-07-14 21:07:01.104020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.941 21:07:01 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.941 21:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:51.849 21:07:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.849 00:07:51.849 real 0m2.300s 00:07:51.849 user 0m2.059s 00:07:51.849 sys 0m0.143s 00:07:51.849 21:07:03 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.849 ************************************ 00:07:51.849 END TEST accel_fill 00:07:51.849 ************************************ 00:07:51.849 21:07:03 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:51.849 21:07:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.849 21:07:03 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:51.849 21:07:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:51.849 21:07:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.849 21:07:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.849 ************************************ 00:07:51.849 START TEST accel_copy_crc32c 00:07:51.849 ************************************ 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:07:51.849 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:51.849 [2024-07-14 21:07:03.128342] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:51.849 [2024-07-14 21:07:03.128571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64938 ] 00:07:51.849 [2024-07-14 21:07:03.303170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.112 [2024-07-14 21:07:03.518766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.370 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.371 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.371 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.371 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.371 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.371 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.371 21:07:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
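[editor's note] The repeated case "$var" / IFS=: / read -r var val lines throughout this trace are one parsing loop in accel.sh: it splits each status line of the perf run on ':' and latches the fields it checks afterwards (accel_opc, accel_module). A hedged sketch of that pattern; the label strings in the case arms and the function name are illustrative, not the script's own:

    # Illustrative var/val loop in the style of the xtrace above. The input
    # labels ("Workload Type", "Module") are assumptions for the sketch only.
    parse_status() {
      local accel_opc='' accel_module=''
      local var val
      local IFS=:
      while read -r var val; do
        case "$var" in
          *Workload*) accel_opc=${val// /} ;;   # e.g. "copy_crc32c"
          *Module*) accel_module=${val// /} ;;  # e.g. "software"
        esac
      done
      [[ -n $accel_opc && -n $accel_module ]] && echo "$accel_opc via $accel_module"
    }
    printf 'Workload Type: copy_crc32c\nModule: software\n' | parse_status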
00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.295 00:07:54.295 real 0m2.317s 00:07:54.295 user 0m2.071s 00:07:54.295 sys 0m0.149s 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.295 21:07:05 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:54.295 ************************************ 00:07:54.295 END TEST accel_copy_crc32c 00:07:54.295 ************************************ 00:07:54.295 21:07:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.295 21:07:05 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:54.295 21:07:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:54.295 21:07:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.295 21:07:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.295 ************************************ 00:07:54.295 START TEST accel_copy_crc32c_C2 00:07:54.295 ************************************ 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.295 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:54.295 [2024-07-14 21:07:05.489791] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:54.295 [2024-07-14 21:07:05.489966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64979 ] 00:07:54.295 [2024-07-14 21:07:05.647097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.295 [2024-07-14 21:07:05.803221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.554 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 21:07:05 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 21:07:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.458 00:07:56.458 real 0m2.207s 00:07:56.458 user 0m1.984s 00:07:56.458 sys 0m0.132s 00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:07:56.458 21:07:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:56.458 ************************************ 00:07:56.458 END TEST accel_copy_crc32c_C2 00:07:56.458 ************************************ 00:07:56.458 21:07:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.458 21:07:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:56.458 21:07:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:56.458 21:07:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.458 21:07:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.458 ************************************ 00:07:56.458 START TEST accel_dualcast 00:07:56.458 ************************************ 00:07:56.458 21:07:07 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:56.458 21:07:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:56.458 [2024-07-14 21:07:07.763704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
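[editor's note] The accel_copy_crc32c_C2 test that just ended reuses the copy_crc32c workload with -C 2, and its trace pairs a 4096-byte source with an 8192-byte destination. Both command lines below are lifted from the invocations shown in this log, minus the config fd; the checkout path is an assumption:

    # Side-by-side of the two copy_crc32c invocations seen in this log.
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk   # assumption: same checkout as the CI box
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy_crc32c -y        # single 4096B buffer
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2  # 4096B src, 8192B dst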
00:07:56.458 [2024-07-14 21:07:07.763994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65024 ] 00:07:56.458 [2024-07-14 21:07:07.934724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.717 [2024-07-14 21:07:08.088528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.717 21:07:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:58.619 21:07:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.619 00:07:58.619 real 0m2.277s 00:07:58.619 user 0m2.042s 00:07:58.619 sys 0m0.142s 00:07:58.619 21:07:09 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.619 21:07:09 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:58.619 ************************************ 00:07:58.619 END TEST accel_dualcast 00:07:58.619 ************************************ 00:07:58.619 21:07:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.619 21:07:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:58.619 21:07:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:58.619 21:07:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.619 21:07:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.619 ************************************ 00:07:58.619 START TEST accel_compare 00:07:58.619 ************************************ 00:07:58.619 21:07:10 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:58.619 21:07:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:58.619 [2024-07-14 21:07:10.084823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
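[editor's note] Every test in this section is framed by the same run_test banners (START TEST / END TEST between asterisk rows) and timed with bash's time keyword, which is where the real/user/sys triples come from. A stripped-down imitation of that framing; accel_test is an assumed helper on PATH, and SPDK's real run_test in autotest_common.sh does more bookkeeping (xtrace toggling, return-code plumbing):

    # Minimal run_test-style wrapper mimicking the banners in this log.
    run_test_sketch() {
      local name=$1 rc
      shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return "$rc"
    }
    run_test_sketch accel_compare accel_test -t 1 -w compare -y   # accel_test: assumed helper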
00:07:58.619 [2024-07-14 21:07:10.085017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65072 ] 00:07:58.895 [2024-07-14 21:07:10.256690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.155 [2024-07-14 21:07:10.451783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.155 21:07:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.059 21:07:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.059 21:07:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.059 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.059 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.059 21:07:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.060 ************************************ 00:08:01.060 END TEST accel_compare 00:08:01.060 ************************************ 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:01.060 21:07:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.060 00:08:01.060 real 0m2.297s 00:08:01.060 user 0m2.053s 00:08:01.060 sys 0m0.152s 00:08:01.060 21:07:12 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.060 21:07:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:01.060 21:07:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.060 21:07:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:01.060 21:07:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:01.060 21:07:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.060 21:07:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.060 ************************************ 00:08:01.060 START TEST accel_xor 00:08:01.060 ************************************ 00:08:01.060 21:07:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:01.060 21:07:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:01.060 [2024-07-14 21:07:12.434430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
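[editor's note] The two accel_xor tests differ only in source count: the run starting above leaves it at the default (its trace sets val=2), while the follow-up test passes -x 3, visible in its invocation further down. Hedged replays using only the flags shown in this log; the checkout path is an assumption:

    # The two xor variants exercised back to back in this log.
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk   # assumption: CI checkout path
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y         # 2 xor sources (default)
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y -x 3   # 3 xor sources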
00:08:01.060 [2024-07-14 21:07:12.434588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65113 ] 00:08:01.060 [2024-07-14 21:07:12.585276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.319 [2024-07-14 21:07:12.733686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.578 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.579 21:07:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.482 21:07:14 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.482 ************************************ 00:08:03.482 END TEST accel_xor 00:08:03.482 ************************************ 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.482 00:08:03.482 real 0m2.233s 00:08:03.482 user 0m2.019s 00:08:03.482 sys 0m0.121s 00:08:03.482 21:07:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.482 21:07:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:03.482 21:07:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.482 21:07:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:03.482 21:07:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:03.482 21:07:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.482 21:07:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.482 ************************************ 00:08:03.482 START TEST accel_xor 00:08:03.482 ************************************ 00:08:03.482 21:07:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:03.482 21:07:14 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:03.483 21:07:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.483 21:07:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.483 21:07:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.483 21:07:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.483 21:07:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.483 21:07:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:03.483 21:07:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:03.483 [2024-07-14 21:07:14.731786] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
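[editor's note] The real/user/sys triples in this section land around 2.2 to 2.3 s per 1-second perf run, the overhead being app startup and teardown. They could be captured per test for trending; a hedged capture idiom, with the output file name purely illustrative:

    # Capturing bash `time` output for one run; time writes to stderr, so the
    # group redirection collects it (accel_perf's own stderr lands there too).
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk   # assumption: CI checkout path
    { time "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y -x 3; } 2> xor_x3.time
    grep -E '^(real|user|sys)' xor_x3.time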
00:08:03.483 [2024-07-14 21:07:14.731980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65154 ] 00:08:03.483 [2024-07-14 21:07:14.900292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.742 [2024-07-14 21:07:15.060436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.742 21:07:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.648 21:07:16 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:05.648 21:07:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.648 00:08:05.648 real 0m2.257s 00:08:05.648 user 0m2.012s 00:08:05.648 sys 0m0.152s 00:08:05.648 21:07:16 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.648 ************************************ 00:08:05.648 END TEST accel_xor 00:08:05.648 ************************************ 00:08:05.648 21:07:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:05.648 21:07:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:05.648 21:07:16 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:05.648 21:07:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:05.648 21:07:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.648 21:07:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.648 ************************************ 00:08:05.648 START TEST accel_dif_verify 00:08:05.648 ************************************ 00:08:05.648 21:07:16 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:05.648 21:07:16 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:05.648 [2024-07-14 21:07:17.044436] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
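The dif_verify vals in this run ('4096 bytes', '512 bytes', '8 bytes') are consistent with the standard T10 DIF layout: data carved into protection intervals, each guarded by an 8-byte protection-information field holding a 2-byte CRC16 guard, a 2-byte application tag and a 4-byte reference tag. A rough generate-then-verify sketch under that assumption; 0x8BB7 is the standard T10-DIF CRC polynomial, while the interval size and tag values here are illustrative:

    import struct

    def crc16_t10dif(data: bytes, crc: int = 0) -> int:
        # Bit-serial CRC16 with the T10-DIF polynomial 0x8BB7 (init 0, unreflected).
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    def dif_generate(block: bytes, ref_tag: int, app_tag: int = 0) -> bytes:
        # 8-byte PI field: 2-byte guard, 2-byte app tag, 4-byte ref tag (big-endian).
        return struct.pack(">HHI", crc16_t10dif(block), app_tag, ref_tag)

    def dif_verify(block: bytes, pi: bytes, ref_tag: int) -> bool:
        guard, _app, ref = struct.unpack(">HHI", pi)
        return guard == crc16_t10dif(block) and ref == ref_tag

    block = bytes(512)                     # one 512-byte protection interval
    pi = dif_generate(block, ref_tag=0)
    assert dif_verify(block, pi, ref_tag=0)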
00:08:05.648 [2024-07-14 21:07:17.044622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65195 ] 00:08:05.906 [2024-07-14 21:07:17.215968] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.906 [2024-07-14 21:07:17.401984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.165 21:07:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.068 21:07:19 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:08.068 21:07:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.068 00:08:08.068 real 0m2.308s 00:08:08.068 user 0m2.047s 00:08:08.068 sys 0m0.170s 00:08:08.068 ************************************ 00:08:08.068 END TEST accel_dif_verify 00:08:08.068 ************************************ 00:08:08.068 21:07:19 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.068 21:07:19 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:08.068 21:07:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.068 21:07:19 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:08.068 21:07:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:08.068 21:07:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.068 21:07:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.068 ************************************ 00:08:08.068 START TEST accel_dif_generate 00:08:08.068 ************************************ 00:08:08.069 21:07:19 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.069 21:07:19 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:08.069 21:07:19 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:08.069 [2024-07-14 21:07:19.406507] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:08.069 [2024-07-14 21:07:19.406681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65243 ] 00:08:08.069 [2024-07-14 21:07:19.575665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.328 [2024-07-14 21:07:19.724994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.586 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.586 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.586 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.586 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.586 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.586 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.586 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.586 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.587 21:07:19 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.587 21:07:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.498 21:07:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.499 21:07:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.499 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.499 21:07:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.499 21:07:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.499 ************************************ 00:08:10.499 END TEST accel_dif_generate 00:08:10.499 ************************************ 00:08:10.499 21:07:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:10.499 
21:07:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.499 00:08:10.499 real 0m2.253s 00:08:10.499 user 0m2.010s 00:08:10.499 sys 0m0.152s 00:08:10.499 21:07:21 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.499 21:07:21 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:10.499 21:07:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:10.499 21:07:21 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:10.499 21:07:21 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:10.499 21:07:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.499 21:07:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.499 ************************************ 00:08:10.499 START TEST accel_dif_generate_copy 00:08:10.499 ************************************ 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:10.499 21:07:21 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:10.499 [2024-07-14 21:07:21.714874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:10.499 [2024-07-14 21:07:21.715059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65288 ] 00:08:10.499 [2024-07-14 21:07:21.883022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.768 [2024-07-14 21:07:22.051479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.768 21:07:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:12.674 ************************************ 00:08:12.674 END TEST accel_dif_generate_copy 00:08:12.674 ************************************ 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.674 00:08:12.674 real 0m2.269s 00:08:12.674 user 0m2.022s 00:08:12.674 sys 0m0.154s 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.674 21:07:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.674 21:07:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.674 21:07:23 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:12.674 21:07:23 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.674 21:07:23 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:12.674 21:07:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.674 21:07:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.674 ************************************ 00:08:12.674 START TEST accel_comp 00:08:12.674 ************************************ 00:08:12.674 21:07:23 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:12.674 21:07:23 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:12.674 21:07:23 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:12.674 [2024-07-14 21:07:24.030727] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:12.674 [2024-07-14 21:07:24.030916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65329 ] 00:08:12.674 [2024-07-14 21:07:24.201219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.934 [2024-07-14 21:07:24.362082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.193 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.194 21:07:24 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.194 21:07:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:15.097 21:07:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.097 00:08:15.097 real 0m2.241s 00:08:15.097 user 0m2.010s 00:08:15.097 sys 0m0.139s 00:08:15.097 21:07:26 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.097 ************************************ 00:08:15.097 END TEST accel_comp 00:08:15.097 ************************************ 00:08:15.097 21:07:26 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:15.097 21:07:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.097 21:07:26 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:15.097 21:07:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:15.097 21:07:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.097 21:07:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.097 ************************************ 00:08:15.097 START TEST accel_decomp 00:08:15.097 ************************************ 00:08:15.097 21:07:26 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.097 21:07:26 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.098 21:07:26 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:15.098 21:07:26 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:15.098 [2024-07-14 21:07:26.323150] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:15.098 [2024-07-14 21:07:26.323306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65370 ] 00:08:15.098 [2024-07-14 21:07:26.488862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.098 [2024-07-14 21:07:26.637215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.357 21:07:26 accel.accel_decomp -- 
00:08:15.356 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:08:15.357 21:07:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:08:15.357 (xtrace entries case "$var" in / IFS=: / read -r var val repeated between each value)
00:08:17.261 21:07:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= (post-run trace group repeated for each remaining variable)
00:08:17.261 21:07:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:17.261 21:07:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:17.261 21:07:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:17.261
00:08:17.261 real  0m2.252s
00:08:17.261 user  0m2.015s
00:08:17.261 sys   0m0.142s
00:08:17.261 21:07:28 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:17.261 ************************************
00:08:17.261 END TEST accel_decomp
00:08:17.261 ************************************
00:08:17.261 21:07:28 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:08:17.261 21:07:28 accel -- common/autotest_common.sh@1142 -- # return 0
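The recurring 'case "$var" in' / 'IFS=:' / 'read -r var val' groups are bash xtrace of accel.sh parsing accel_perf's 'key: value' report one line at a time, which is why every captured value (val=0x1, val=decompress, ...) is bracketed by the same three entries. A rough sketch of that pattern (variable and function names here are illustrative placeholders, not copied from accel.sh):

    # split each report line on the first ':' into a key and a value
    while IFS=: read -r var val; do
      case "$var" in
        *opcode*) accel_opc=${val//[[:space:]]/} ;;    # operation under test
        *module*) accel_module=${val//[[:space:]]/} ;; # engine that executed it
      esac
    done < <(run_accel_perf "$@")   # run_accel_perf is a hypothetical wrapper

The closing '[[ -n software ]]' / '[[ -n decompress ]]' checks then assert that both values were actually seen before the test counts as passed.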
00:08:17.261 21:07:28 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:08:17.261 21:07:28 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:08:17.261 21:07:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:17.261 21:07:28 accel -- common/autotest_common.sh@10 -- # set +x
00:08:17.261 ************************************
00:08:17.261 START TEST accel_decomp_full
00:08:17.261 ************************************
00:08:17.261 21:07:28 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:08:17.261 21:07:28 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:08:17.261 21:07:28 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config (locals and config trace identical to the accel_decomp block above)
00:08:17.261 [2024-07-14 21:07:28.628941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:17.261 [2024-07-14 21:07:28.629113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65417 ]
00:08:17.261 [2024-07-14 21:07:28.797999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.520 [2024-07-14 21:07:28.961287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:08:17.780 21:07:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:08:17.780 (xtrace entries case "$var" in / IFS=: / read -r var val repeated between each value)
00:08:19.679 21:07:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= (post-run trace group repeated for each remaining variable)
00:08:19.679 21:07:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:19.679 21:07:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:19.679 21:07:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:19.679
00:08:19.679 real  0m2.271s
00:08:19.679 user  0m2.035s
00:08:19.679 sys   0m0.144s
00:08:19.679 21:07:30 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:19.679 ************************************
00:08:19.679 END TEST accel_decomp_full
00:08:19.679 ************************************
00:08:19.679 21:07:30 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:08:19.679 21:07:30 accel -- common/autotest_common.sh@1142 -- # return 0
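Worth noting in the accel_decomp_full trace: adding -o 0 changes the logged transfer size from val='4096 bytes' to val='111250 bytes'. The 'full' variants evidently hand accel_perf the whole input file as a single buffer instead of the default 4 KiB blocks, which would make 111250 the on-disk size of test/accel/bib. A quick sanity check from the repo root (hypothetical command and expected output, not part of this log):

    $ stat -c %s test/accel/bib
    111250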
00:08:19.679 21:07:30 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:08:19.679 21:07:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:08:19.679 21:07:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:19.679 21:07:30 accel -- common/autotest_common.sh@10 -- # set +x
00:08:19.679 ************************************
00:08:19.679 START TEST accel_decomp_mcore
00:08:19.679 ************************************
00:08:19.679 21:07:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:08:19.680 21:07:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:08:19.680 21:07:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config (locals and config trace identical to the accel_decomp block above)
00:08:19.680 [2024-07-14 21:07:30.936643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:19.680 [2024-07-14 21:07:30.936840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65458 ]
00:08:19.936 [2024-07-14 21:07:31.094494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:19.936 [2024-07-14 21:07:31.253868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:19.936 [2024-07-14 21:07:31.253977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:19.936 [2024-07-14 21:07:31.254099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:19.936 [2024-07-14 21:07:31.254116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:19.937 21:07:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:19.937 (xtrace entries case "$var" in / IFS=: / read -r var val repeated between each value)
00:08:21.850 21:07:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= (post-run trace group repeated for each remaining variable)
00:08:21.850 21:07:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:21.850 21:07:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:21.850 21:07:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:21.850
00:08:21.850 real  0m2.287s
00:08:21.850 user  0m0.019s
00:08:21.850 sys   0m0.002s
00:08:21.850 21:07:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:21.850 ************************************
00:08:21.850 END TEST accel_decomp_mcore
00:08:21.850 ************************************
00:08:21.850 21:07:33 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:08:21.850 21:07:33 accel -- common/autotest_common.sh@1142 -- # return 0
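The -m 0xf argument is the core mask handed to the SPDK app: bits 0 through 3 set, hence 'Total cores available: 4' and one reactor per core (the 1, 2, 0, 3 startup order is just scheduling). A mask covering the first N cores can be derived in shell, e.g.:

    $ N=4; printf -- '-m 0x%x\n' $(( (1 << N) - 1 ))
    -m 0xf

The timing summary for this block (user 0m0.019s against real 0m2.287s) also suggests the CPU time burned by the accel_perf child is not attributed to the timed shell function here, unlike in the single-core runs; that reading is an inference from the numbers, not something the log states.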
00:08:21.850 21:07:33 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:21.850 21:07:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:21.850 21:07:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:21.850 21:07:33 accel -- common/autotest_common.sh@10 -- # set +x
00:08:21.850 ************************************
00:08:21.850 START TEST accel_decomp_full_mcore
00:08:21.850 ************************************
00:08:21.850 21:07:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:21.850 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:21.850 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config (locals and config trace identical to the accel_decomp block above)
00:08:22.116 [2024-07-14 21:07:33.281548] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:22.116 [2024-07-14 21:07:33.281771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65506 ]
00:08:22.116 [2024-07-14 21:07:33.442578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:22.116 [2024-07-14 21:07:33.598161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:22.116 [2024-07-14 21:07:33.598262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:22.116 [2024-07-14 21:07:33.598755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:22.116 [2024-07-14 21:07:33.598758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:22.376 21:07:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:22.376 (xtrace entries case "$var" in / IFS=: / read -r var val repeated between each value)
00:08:24.281 21:07:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= (post-run trace group repeated for each remaining variable)
00:08:24.281 21:07:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:24.281 21:07:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:24.281 21:07:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:24.281
00:08:24.281 real  0m2.346s
00:08:24.281 user  0m0.024s
00:08:24.281 sys   0m0.001s
00:08:24.281 21:07:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:24.281 ************************************
00:08:24.281 END TEST accel_decomp_full_mcore
00:08:24.281 ************************************
00:08:24.281 21:07:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:08:24.281 21:07:35 accel -- common/autotest_common.sh@1142 -- # return 0
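Across the four decompress variants so far, wall-clock time is nearly flat (real 0m2.24s to 0m2.35s) regardless of buffer size or core count: the measured phase is pinned to one second by -t 1 (val='1 seconds' in every trace), and the remainder is app start-up and teardown. If these summaries need to be pulled out of a saved console log, a simple filter along these lines would do (build.log is a placeholder file name):

    $ grep -E '(real|user|sys)[[:space:]]+[0-9]+m[0-9.]+s' build.log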
00:08:24.281 21:07:35 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:08:24.281 21:07:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:08:24.281 21:07:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:24.281 21:07:35 accel -- common/autotest_common.sh@10 -- # set +x
00:08:24.281 ************************************
00:08:24.281 START TEST accel_decomp_mthread
00:08:24.281 ************************************
00:08:24.281 21:07:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:08:24.281 21:07:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:08:24.281 21:07:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config (locals and config trace identical to the accel_decomp block above)
00:08:24.281 [2024-07-14 21:07:35.675909] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:24.281 [2024-07-14 21:07:35.676067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65557 ]
00:08:24.540 [2024-07-14 21:07:35.833518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.540 [2024-07-14 21:07:35.998108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:08:24.799 21:07:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:08:24.799 (xtrace entries case "$var" in / IFS=: / read -r var val repeated between each value)
00:08:26.704 21:07:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= (post-run trace group repeated for each remaining variable)
00:08:26.704 21:07:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:26.704 21:07:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:26.704 21:07:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:26.704
00:08:26.704 real  0m2.244s
00:08:26.704 user  0m2.014s
00:08:26.704 sys   0m0.139s
00:08:26.704 21:07:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:26.704 ************************************
00:08:26.704 END TEST accel_decomp_mthread
00:08:26.704 ************************************
00:08:26.704 21:07:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:08:26.704 21:07:37 accel -- common/autotest_common.sh@1142 -- # return 0
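The mthread variant changes a single knob: -T 2, visible in the trace as val=2 sitting between the 32/32 queue settings and the duration. That appears to be the worker-thread count handed to accel_perf, so the same one-core decompress load is driven by two threads; the sketch below is lifted straight from the logged command line, with only the harness config pipe dropped (an assumption, as before):

    $ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2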
TEST accel_decomp_full_mthread 00:08:26.704 ************************************ 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:26.704 21:07:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:26.704 [2024-07-14 21:07:37.982272] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:26.704 [2024-07-14 21:07:37.982460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65598 ] 00:08:26.704 [2024-07-14 21:07:38.154254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.964 [2024-07-14 21:07:38.310459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:26.964 21:07:38 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.964 21:07:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.899 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:28.899 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.899 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.899 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.899 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:28.899 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.899 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.899 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.900 00:08:28.900 real 0m2.310s 00:08:28.900 user 0m2.075s 00:08:28.900 sys 0m0.150s 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.900 21:07:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:28.900 ************************************ 00:08:28.900 END TEST accel_decomp_full_mthread 00:08:28.900 ************************************ 
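The trace above shows the harness stepping accel_perf through its option parser: two worker threads (-T 2), a one-second decompress pass over test/accel/bib with verification (-y), and the full-size output buffer ('111250 bytes') that distinguishes the _full_ variant. A minimal standalone sketch of the same invocation follows; the paths match this job's layout, and the empty JSON config stands in for the one build_accel_config normally pipes over /dev/fd/62, so treat both as assumptions:

  # Hypothetical reproduction of the accel_decomp_full_mthread case.
  # -w selects the workload, -t the run time in seconds, -T the worker
  # thread count; -c takes an accel module config (empty here, so the
  # software decompress path is used, matching the [[ -n software ]] check).
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -c <(printf '{}') -t 1 -w decompress \
      -l ./test/accel/bib -y -o 0 -T 2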
00:08:28.900 21:07:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.900 21:07:40 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:28.900 21:07:40 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:28.900 21:07:40 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:28.900 21:07:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.900 21:07:40 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:28.900 21:07:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.900 21:07:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.900 21:07:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.900 21:07:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.900 21:07:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.900 21:07:40 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.900 21:07:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:28.900 21:07:40 accel -- accel/accel.sh@41 -- # jq -r . 00:08:28.900 ************************************ 00:08:28.900 START TEST accel_dif_functional_tests 00:08:28.900 ************************************ 00:08:28.900 21:07:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:28.900 [2024-07-14 21:07:40.370658] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:28.900 [2024-07-14 21:07:40.370823] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65640 ] 00:08:29.159 [2024-07-14 21:07:40.531382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.159 [2024-07-14 21:07:40.698150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.159 [2024-07-14 21:07:40.698323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.159 [2024-07-14 21:07:40.698334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.418 00:08:29.418 00:08:29.418 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.418 http://cunit.sourceforge.net/ 00:08:29.418 00:08:29.418 00:08:29.418 Suite: accel_dif 00:08:29.418 Test: verify: DIF generated, GUARD check ...passed 00:08:29.418 Test: verify: DIF generated, APPTAG check ...passed 00:08:29.418 Test: verify: DIF generated, REFTAG check ...passed 00:08:29.418 Test: verify: DIF not generated, GUARD check ...[2024-07-14 21:07:40.938398] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:29.418 passed 00:08:29.418 Test: verify: DIF not generated, APPTAG check ...passed 00:08:29.418 Test: verify: DIF not generated, REFTAG check ...passed 00:08:29.418 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:29.418 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:29.418 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-14 21:07:40.938546] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:29.418 [2024-07-14 21:07:40.938596] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:29.418 [2024-07-14 21:07:40.938692] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:08:29.418 passed 00:08:29.418 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:29.418 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:29.418 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:29.418 Test: verify copy: DIF generated, GUARD check ...passed 00:08:29.418 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:29.418 Test: verify copy: DIF generated, REFTAG check ...[2024-07-14 21:07:40.938921] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:29.418 passed 00:08:29.418 Test: verify copy: DIF not generated, GUARD check ...passed 00:08:29.418 Test: verify copy: DIF not generated, APPTAG check ...passed 00:08:29.418 Test: verify copy: DIF not generated, REFTAG check ...passed 00:08:29.418 Test: generate copy: DIF generated, GUARD check ...[2024-07-14 21:07:40.939187] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:29.418 [2024-07-14 21:07:40.939277] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:29.418 [2024-07-14 21:07:40.939344] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:29.418 passed 00:08:29.418 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:29.418 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:29.418 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:29.418 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:29.418 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:29.418 Test: generate copy: iovecs-len validate ...passed 00:08:29.418 Test: generate copy: buffer alignment validate ...passed 00:08:29.418 00:08:29.418 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.418 suites 1 1 n/a 0 0 00:08:29.418 tests 26 26 26 0 0 00:08:29.418 asserts 115 115 115 0 n/a 00:08:29.418 00:08:29.418 Elapsed time = 0.005 seconds 00:08:29.418 [2024-07-14 21:07:40.939754] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
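Note that the dif.c *ERROR* lines interleaved with the passed markers are the point of these cases: each "not generated" or "incorrect" test feeds the verifier a deliberately mismatched guard, app, or ref tag, so the logged comparison failure is the behavior being asserted, not a defect. A standalone sketch of the same binary, assuming an empty accel JSON config is acceptable where the harness pipes its generated one over /dev/fd/62:

  # Hypothetical direct run of the DIF functional test suite.
  cd /home/vagrant/spdk_repo/spdk
  ./test/accel/dif/dif -c <(printf '{}')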
00:08:30.798 00:08:30.798 real 0m1.623s 00:08:30.798 user 0m3.046s 00:08:30.798 sys 0m0.213s 00:08:30.798 21:07:41 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.798 21:07:41 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:30.798 ************************************ 00:08:30.798 END TEST accel_dif_functional_tests 00:08:30.798 ************************************ 00:08:30.798 21:07:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:30.798 00:08:30.798 real 0m54.180s 00:08:30.798 user 0m59.289s 00:08:30.798 sys 0m4.743s 00:08:30.798 21:07:41 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.798 21:07:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.798 ************************************ 00:08:30.798 END TEST accel 00:08:30.798 ************************************ 00:08:30.798 21:07:42 -- common/autotest_common.sh@1142 -- # return 0 00:08:30.798 21:07:42 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:30.798 21:07:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.798 21:07:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.798 21:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:30.798 ************************************ 00:08:30.798 START TEST accel_rpc 00:08:30.798 ************************************ 00:08:30.798 21:07:42 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:30.798 * Looking for test storage... 00:08:30.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:30.798 21:07:42 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:30.798 21:07:42 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65722 00:08:30.798 21:07:42 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 65722 00:08:30.798 21:07:42 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:30.798 21:07:42 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 65722 ']' 00:08:30.798 21:07:42 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.798 21:07:42 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.798 21:07:42 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.798 21:07:42 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.798 21:07:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.798 [2024-07-14 21:07:42.204519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:30.798 [2024-07-14 21:07:42.204762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65722 ] 00:08:31.057 [2024-07-14 21:07:42.370939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.057 [2024-07-14 21:07:42.532139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.626 21:07:43 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.626 21:07:43 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:31.626 21:07:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:31.626 21:07:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:31.626 21:07:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:31.626 21:07:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:31.626 21:07:43 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:31.626 21:07:43 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.626 21:07:43 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.626 21:07:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.626 ************************************ 00:08:31.626 START TEST accel_assign_opcode 00:08:31.626 ************************************ 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:31.626 [2024-07-14 21:07:43.137221] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:31.626 [2024-07-14 21:07:43.145200] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.626 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # grep software 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.559 software 00:08:32.559 ************************************ 00:08:32.559 END TEST accel_assign_opcode 00:08:32.559 ************************************ 00:08:32.559 00:08:32.559 real 0m0.667s 00:08:32.559 user 0m0.055s 00:08:32.559 sys 0m0.010s 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.559 21:07:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:32.559 21:07:43 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 65722 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 65722 ']' 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 65722 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65722 00:08:32.559 killing process with pid 65722 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65722' 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@967 -- # kill 65722 00:08:32.559 21:07:43 accel_rpc -- common/autotest_common.sh@972 -- # wait 65722 00:08:34.461 00:08:34.461 real 0m3.603s 00:08:34.461 user 0m3.710s 00:08:34.461 sys 0m0.415s 00:08:34.461 21:07:45 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.461 ************************************ 00:08:34.461 END TEST accel_rpc 00:08:34.461 ************************************ 00:08:34.461 21:07:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.461 21:07:45 -- common/autotest_common.sh@1142 -- # return 0 00:08:34.461 21:07:45 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:34.461 21:07:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:34.461 21:07:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.461 21:07:45 -- common/autotest_common.sh@10 -- # set +x 00:08:34.461 ************************************ 00:08:34.461 START TEST app_cmdline 00:08:34.461 ************************************ 00:08:34.461 21:07:45 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:34.461 * Looking for test storage... 
00:08:34.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:34.461 21:07:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:34.461 21:07:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=65833 00:08:34.461 21:07:45 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:34.461 21:07:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 65833 00:08:34.461 21:07:45 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 65833 ']' 00:08:34.461 21:07:45 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.461 21:07:45 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.461 21:07:45 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.461 21:07:45 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.461 21:07:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:34.461 [2024-07-14 21:07:45.844182] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:34.461 [2024-07-14 21:07:45.844326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65833 ] 00:08:34.461 [2024-07-14 21:07:45.998392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.719 [2024-07-14 21:07:46.152396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.286 21:07:46 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.286 21:07:46 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:35.286 21:07:46 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:35.545 { 00:08:35.545 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:35.545 "fields": { 00:08:35.545 "major": 24, 00:08:35.545 "minor": 9, 00:08:35.545 "patch": 0, 00:08:35.545 "suffix": "-pre", 00:08:35.545 "commit": "719d03c6a" 00:08:35.545 } 00:08:35.545 } 00:08:35.545 21:07:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:35.545 21:07:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:35.545 21:07:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:35.545 21:07:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:35.545 21:07:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:35.545 21:07:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:35.545 21:07:46 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.545 21:07:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:35.545 21:07:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:35.545 21:07:46 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.545 21:07:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:35.545 21:07:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:35.545 21:07:47 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:35.545 21:07:47 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.804 request: 00:08:35.804 { 00:08:35.804 "method": "env_dpdk_get_mem_stats", 00:08:35.804 "req_id": 1 00:08:35.804 } 00:08:35.804 Got JSON-RPC error response 00:08:35.804 response: 00:08:35.804 { 00:08:35.804 "code": -32601, 00:08:35.804 "message": "Method not found" 00:08:35.804 } 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.804 21:07:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 65833 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 65833 ']' 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 65833 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.804 21:07:47 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65833 00:08:36.062 21:07:47 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:36.062 21:07:47 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:36.062 killing process with pid 65833 00:08:36.062 21:07:47 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65833' 00:08:36.062 21:07:47 app_cmdline -- common/autotest_common.sh@967 -- # kill 65833 00:08:36.062 21:07:47 app_cmdline -- common/autotest_common.sh@972 -- # wait 65833 00:08:37.962 00:08:37.962 real 0m3.439s 00:08:37.962 user 0m3.933s 00:08:37.962 sys 0m0.464s 00:08:37.962 21:07:49 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.962 21:07:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:37.962 ************************************ 00:08:37.962 END TEST app_cmdline 00:08:37.962 ************************************ 00:08:37.962 21:07:49 -- common/autotest_common.sh@1142 -- # return 0 00:08:37.962 21:07:49 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:37.962 21:07:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:37.962 21:07:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.962 21:07:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.962 ************************************ 00:08:37.962 START TEST version 00:08:37.962 ************************************ 00:08:37.962 21:07:49 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:37.962 * Looking for test storage... 00:08:37.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:37.962 21:07:49 version -- app/version.sh@17 -- # get_header_version major 00:08:37.962 21:07:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:37.962 21:07:49 version -- app/version.sh@14 -- # cut -f2 00:08:37.962 21:07:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:37.962 21:07:49 version -- app/version.sh@17 -- # major=24 00:08:37.962 21:07:49 version -- app/version.sh@18 -- # get_header_version minor 00:08:37.962 21:07:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:37.962 21:07:49 version -- app/version.sh@14 -- # cut -f2 00:08:37.962 21:07:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:37.962 21:07:49 version -- app/version.sh@18 -- # minor=9 00:08:37.962 21:07:49 version -- app/version.sh@19 -- # get_header_version patch 00:08:37.962 21:07:49 version -- app/version.sh@14 -- # cut -f2 00:08:37.962 21:07:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:37.962 21:07:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:37.962 21:07:49 version -- app/version.sh@19 -- # patch=0 00:08:37.962 21:07:49 version -- app/version.sh@20 -- # get_header_version suffix 00:08:37.962 21:07:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:37.962 21:07:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:37.962 21:07:49 version -- app/version.sh@14 -- # cut -f2 00:08:37.962 21:07:49 version -- app/version.sh@20 -- # suffix=-pre 00:08:37.962 21:07:49 version -- app/version.sh@22 -- # version=24.9 00:08:37.962 21:07:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:37.962 21:07:49 version -- app/version.sh@28 -- # version=24.9rc0 00:08:37.962 21:07:49 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:37.962 21:07:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:37.962 21:07:49 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:37.962 21:07:49 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:37.962 00:08:37.962 real 0m0.150s 00:08:37.962 user 0m0.084s 00:08:37.962 sys 0m0.095s 00:08:37.962 21:07:49 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.962 21:07:49 version -- common/autotest_common.sh@10 -- # set +x 00:08:37.962 ************************************ 00:08:37.962 END TEST version 00:08:37.962 ************************************ 
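Each get_header_version call above is the same three-stage pipeline aimed at a different #define; the cut -f2 extraction relies on the macros in version.h being tab-separated. A condensed sketch of what the traced commands compute for this run (major 24, minor 9, patch 0, suffix -pre, folded into 24.9rc0 by the script's own suffix handling), with the final line cross-checking the Python package the same way the test does:

  # Paths assume this job's layout; the rc0 folding itself lives in version.sh.
  v=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$v" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$v" | cut -f2 | tr -d '"')
  echo "$major.$minor"   # 24.9 for this checkout
  PYTHONPATH=/home/vagrant/spdk_repo/spdk/python \
      python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0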
00:08:37.962 21:07:49 -- common/autotest_common.sh@1142 -- # return 0 00:08:37.962 21:07:49 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:37.963 21:07:49 -- spdk/autotest.sh@198 -- # uname -s 00:08:37.963 21:07:49 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:37.963 21:07:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:37.963 21:07:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:37.963 21:07:49 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:37.963 21:07:49 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:37.963 21:07:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:37.963 21:07:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.963 21:07:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.963 ************************************ 00:08:37.963 START TEST blockdev_nvme 00:08:37.963 ************************************ 00:08:37.963 21:07:49 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:37.963 * Looking for test storage... 00:08:37.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:37.963 21:07:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65994 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:37.963 
21:07:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 65994 00:08:37.963 21:07:49 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 65994 ']' 00:08:37.963 21:07:49 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.963 21:07:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:37.963 21:07:49 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.963 21:07:49 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.963 21:07:49 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.963 21:07:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.221 [2024-07-14 21:07:49.584146] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:38.221 [2024-07-14 21:07:49.584327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65994 ] 00:08:38.221 [2024-07-14 21:07:49.756110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.500 [2024-07-14 21:07:49.918645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.066 21:07:50 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.066 21:07:50 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:08:39.066 21:07:50 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:39.066 21:07:50 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:39.066 21:07:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:39.066 21:07:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:39.066 21:07:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:39.325 21:07:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:39.325 21:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.325 21:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.585 21:07:50 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.585 21:07:50 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:08:39.585 21:07:50 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n 
accel 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.585 21:07:50 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.585 21:07:50 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.585 21:07:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:39.585 21:07:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.585 21:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.585 21:07:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:39.585 21:07:51 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.585 21:07:51 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:39.585 21:07:51 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:39.586 21:07:51 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fefb93ac-c238-456d-ad1f-33624eb36dba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fefb93ac-c238-456d-ad1f-33624eb36dba",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d045f4a2-d762-4c6c-82b9-6dc31598b92c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d045f4a2-d762-4c6c-82b9-6dc31598b92c",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c8271dd8-cfff-4455-9ce7-e9cc939c9980"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c8271dd8-cfff-4455-9ce7-e9cc939c9980",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "731fd967-01ca-4860-ada7-70217e60392c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "731fd967-01ca-4860-ada7-70217e60392c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "74c3feeb-f4cf-4a26-b098-21f3dfd34098"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "74c3feeb-f4cf-4a26-b098-21f3dfd34098",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "63c29c1f-7bad-4a99-bd2b-ba7699ae943a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "63c29c1f-7bad-4a99-bd2b-ba7699ae943a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' 
"firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:39.586 21:07:51 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:39.586 21:07:51 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:39.586 21:07:51 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:39.586 21:07:51 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 65994 00:08:39.586 21:07:51 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 65994 ']' 00:08:39.586 21:07:51 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 65994 00:08:39.586 21:07:51 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:08:39.586 21:07:51 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.586 21:07:51 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65994 00:08:39.845 21:07:51 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:39.845 killing process with pid 65994 00:08:39.845 21:07:51 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:39.845 21:07:51 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65994' 00:08:39.845 21:07:51 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 65994 00:08:39.845 21:07:51 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 65994 00:08:41.749 21:07:52 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:41.749 21:07:52 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:41.749 21:07:52 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:41.749 21:07:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.749 21:07:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.749 ************************************ 00:08:41.749 START TEST bdev_hello_world 00:08:41.749 ************************************ 00:08:41.749 21:07:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:41.749 [2024-07-14 21:07:53.015395] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:41.749 [2024-07-14 21:07:53.015576] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66084 ] 00:08:41.749 [2024-07-14 21:07:53.188414] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.007 [2024-07-14 21:07:53.337345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.574 [2024-07-14 21:07:53.904127] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:42.574 [2024-07-14 21:07:53.904188] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:42.574 [2024-07-14 21:07:53.904213] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:42.574 [2024-07-14 21:07:53.906876] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:42.574 [2024-07-14 21:07:53.907468] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:42.574 [2024-07-14 21:07:53.907507] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:42.574 [2024-07-14 21:07:53.907778] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:42.574 00:08:42.574 [2024-07-14 21:07:53.907832] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:43.510 00:08:43.510 real 0m1.926s 00:08:43.510 user 0m1.605s 00:08:43.510 sys 0m0.213s 00:08:43.510 21:07:54 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.510 21:07:54 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:43.510 ************************************ 00:08:43.510 END TEST bdev_hello_world 00:08:43.510 ************************************ 00:08:43.510 21:07:54 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:43.510 21:07:54 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:43.510 21:07:54 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.510 21:07:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.510 21:07:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.510 ************************************ 00:08:43.510 START TEST bdev_bounds 00:08:43.510 ************************************ 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66126 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:43.510 Process bdevio pid: 66126 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66126' 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66126 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66126 ']' 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.510 
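The hello_bdev run recorded above is the stock SPDK example app: it opens Nvme0n1 from the JSON config, writes a buffer, reads it back ("Hello World!"), and stops; the real/user/sys figures are the shell's time output for the app. Repeating it by hand uses the harness's own invocation:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1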
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.510 21:07:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:43.510 [2024-07-14 21:07:54.971287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:43.510 [2024-07-14 21:07:54.971490] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66126 ] 00:08:43.788 [2024-07-14 21:07:55.127103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.788 [2024-07-14 21:07:55.289753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.788 [2024-07-14 21:07:55.289934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.788 [2024-07-14 21:07:55.289956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.744 21:07:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.744 21:07:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:08:44.744 21:07:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:44.744 I/O targets: 00:08:44.744 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:44.744 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:44.744 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:44.744 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:44.744 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:44.744 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:44.744 00:08:44.745 00:08:44.745 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.745 http://cunit.sourceforge.net/ 00:08:44.745 00:08:44.745 00:08:44.745 Suite: bdevio tests on: Nvme3n1 00:08:44.745 Test: blockdev write read block ...passed 00:08:44.745 Test: blockdev write zeroes read block ...passed 00:08:44.745 Test: blockdev write zeroes read no split ...passed 00:08:44.745 Test: blockdev write zeroes read split ...passed 00:08:44.745 Test: blockdev write zeroes read split partial ...passed 00:08:44.745 Test: blockdev reset ...[2024-07-14 21:07:56.092640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:44.745 [2024-07-14 21:07:56.096747] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
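The bounds suite is two cooperating processes, both launched above: bdevio started with -w, so it registers every bdev from bdev.json and then waits, and tests.py, which triggers the run over the RPC socket. A manual re-run with the same paths would look like:

  # start bdevio in wait mode, then kick it via RPC
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

Each 'Suite: bdevio tests on: ...' block that follows runs one bdev through the same battery: write/read shapes, reset, comparev/writev, and NVMe passthru.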
00:08:44.745 passed 00:08:44.745 Test: blockdev write read 8 blocks ...passed 00:08:44.745 Test: blockdev write read size > 128k ...passed 00:08:44.745 Test: blockdev write read invalid size ...passed 00:08:44.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.745 Test: blockdev write read max offset ...passed 00:08:44.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.745 Test: blockdev writev readv 8 blocks ...passed 00:08:44.745 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.745 Test: blockdev writev readv block ...passed 00:08:44.745 Test: blockdev writev readv size > 128k ...passed 00:08:44.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.745 Test: blockdev comparev and writev ...[2024-07-14 21:07:56.106019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27760a000 len:0x1000 00:08:44.745 [2024-07-14 21:07:56.106115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:44.745 passed 00:08:44.745 Test: blockdev nvme passthru rw ...passed 00:08:44.745 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:07:56.106952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:44.745 passed 00:08:44.745 Test: blockdev nvme admin passthru ...[2024-07-14 21:07:56.107040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:44.745 passed 00:08:44.745 Test: blockdev copy ...passed 00:08:44.745 Suite: bdevio tests on: Nvme2n3 00:08:44.745 Test: blockdev write read block ...passed 00:08:44.745 Test: blockdev write zeroes read block ...passed 00:08:44.745 Test: blockdev write zeroes read no split ...passed 00:08:44.745 Test: blockdev write zeroes read split ...passed 00:08:44.745 Test: blockdev write zeroes read split partial ...passed 00:08:44.745 Test: blockdev reset ...[2024-07-14 21:07:56.173171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:44.745 [2024-07-14 21:07:56.177579] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
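The COMPARE FAILURE completions in these suites are the expected miscompare half of the compare-and-write cases, not failures of the run; the tests pass immediately afterwards. Read against the NVMe status tables, (02/85) decodes as SCT 0x2 (Media and Data Integrity Errors) with SC 0x85 (Compare Failure), and dnr:1 marks the completion do-not-retry. The INVALID OPCODE (00/01) completions under the passthru cases follow the same pattern: the QEMU controller rejecting a vendor-specific opcode it does not implement, with that error path being the thing the cases exercise.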
00:08:44.745 passed 00:08:44.745 Test: blockdev write read 8 blocks ...passed 00:08:44.745 Test: blockdev write read size > 128k ...passed 00:08:44.745 Test: blockdev write read invalid size ...passed 00:08:44.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.745 Test: blockdev write read max offset ...passed 00:08:44.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.745 Test: blockdev writev readv 8 blocks ...passed 00:08:44.745 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.745 Test: blockdev writev readv block ...passed 00:08:44.745 Test: blockdev writev readv size > 128k ...passed 00:08:44.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.745 Test: blockdev comparev and writev ...[2024-07-14 21:07:56.185707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26e404000 len:0x1000 00:08:44.745 [2024-07-14 21:07:56.185766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:44.745 passed 00:08:44.745 Test: blockdev nvme passthru rw ...passed 00:08:44.745 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:07:56.186611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:44.745 [2024-07-14 21:07:56.186668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:44.745 passed 00:08:44.745 Test: blockdev nvme admin passthru ...passed 00:08:44.745 Test: blockdev copy ...passed 00:08:44.745 Suite: bdevio tests on: Nvme2n2 00:08:44.745 Test: blockdev write read block ...passed 00:08:44.745 Test: blockdev write zeroes read block ...passed 00:08:44.745 Test: blockdev write zeroes read no split ...passed 00:08:44.745 Test: blockdev write zeroes read split ...passed 00:08:44.745 Test: blockdev write zeroes read split partial ...passed 00:08:44.745 Test: blockdev reset ...[2024-07-14 21:07:56.247536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:44.745 [2024-07-14 21:07:56.251363] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:44.745 passed 00:08:44.745 Test: blockdev write read 8 blocks ...passed 00:08:44.745 Test: blockdev write read size > 128k ...passed 00:08:44.745 Test: blockdev write read invalid size ...passed 00:08:44.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.745 Test: blockdev write read max offset ...passed 00:08:44.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.745 Test: blockdev writev readv 8 blocks ...passed 00:08:44.745 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.745 Test: blockdev writev readv block ...passed 00:08:44.745 Test: blockdev writev readv size > 128k ...passed 00:08:44.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.745 Test: blockdev comparev and writev ...[2024-07-14 21:07:56.259739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27f83a000 len:0x1000 00:08:44.745 [2024-07-14 21:07:56.259808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:44.745 passed 00:08:44.745 Test: blockdev nvme passthru rw ...passed 00:08:44.745 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:07:56.260741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:44.745 [2024-07-14 21:07:56.260781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:44.745 passed 00:08:44.745 Test: blockdev nvme admin passthru ...passed 00:08:44.745 Test: blockdev copy ...passed 00:08:44.745 Suite: bdevio tests on: Nvme2n1 00:08:44.745 Test: blockdev write read block ...passed 00:08:44.745 Test: blockdev write zeroes read block ...passed 00:08:44.745 Test: blockdev write zeroes read no split ...passed 00:08:45.005 Test: blockdev write zeroes read split ...passed 00:08:45.005 Test: blockdev write zeroes read split partial ...passed 00:08:45.005 Test: blockdev reset ...[2024-07-14 21:07:56.323598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:45.005 passed 00:08:45.005 Test: blockdev write read 8 blocks ...[2024-07-14 21:07:56.327711] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:45.005 passed 00:08:45.005 Test: blockdev write read size > 128k ...passed 00:08:45.005 Test: blockdev write read invalid size ...passed 00:08:45.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:45.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:45.005 Test: blockdev write read max offset ...passed 00:08:45.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:45.005 Test: blockdev writev readv 8 blocks ...passed 00:08:45.005 Test: blockdev writev readv 30 x 1block ...passed 00:08:45.005 Test: blockdev writev readv block ...passed 00:08:45.005 Test: blockdev writev readv size > 128k ...passed 00:08:45.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:45.005 Test: blockdev comparev and writev ...[2024-07-14 21:07:56.336284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27f834000 len:0x1000 00:08:45.005 [2024-07-14 21:07:56.336359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:45.005 passed 00:08:45.005 Test: blockdev nvme passthru rw ...passed 00:08:45.005 Test: blockdev nvme passthru vendor specific ...passed 00:08:45.005 Test: blockdev nvme admin passthru ...[2024-07-14 21:07:56.337423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:45.005 [2024-07-14 21:07:56.337484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:45.005 passed 00:08:45.005 Test: blockdev copy ...passed 00:08:45.005 Suite: bdevio tests on: Nvme1n1 00:08:45.005 Test: blockdev write read block ...passed 00:08:45.005 Test: blockdev write zeroes read block ...passed 00:08:45.005 Test: blockdev write zeroes read no split ...passed 00:08:45.005 Test: blockdev write zeroes read split ...passed 00:08:45.005 Test: blockdev write zeroes read split partial ...passed 00:08:45.005 Test: blockdev reset ...[2024-07-14 21:07:56.394866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:45.005 [2024-07-14 21:07:56.398648] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:45.005 passed 00:08:45.005 Test: blockdev write read 8 blocks ...passed 00:08:45.005 Test: blockdev write read size > 128k ...passed 00:08:45.005 Test: blockdev write read invalid size ...passed 00:08:45.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:45.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:45.005 Test: blockdev write read max offset ...passed 00:08:45.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:45.005 Test: blockdev writev readv 8 blocks ...passed 00:08:45.005 Test: blockdev writev readv 30 x 1block ...passed 00:08:45.005 Test: blockdev writev readv block ...passed 00:08:45.005 Test: blockdev writev readv size > 128k ...passed 00:08:45.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:45.005 Test: blockdev comparev and writev ...[2024-07-14 21:07:56.407951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27f830000 len:0x1000 00:08:45.005 [2024-07-14 21:07:56.408023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:45.005 passed 00:08:45.005 Test: blockdev nvme passthru rw ...passed 00:08:45.005 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:07:56.408857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:45.005 [2024-07-14 21:07:56.408929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:45.005 passed 00:08:45.005 Test: blockdev nvme admin passthru ...passed 00:08:45.005 Test: blockdev copy ...passed 00:08:45.005 Suite: bdevio tests on: Nvme0n1 00:08:45.005 Test: blockdev write read block ...passed 00:08:45.005 Test: blockdev write zeroes read block ...passed 00:08:45.005 Test: blockdev write zeroes read no split ...passed 00:08:45.005 Test: blockdev write zeroes read split ...passed 00:08:45.005 Test: blockdev write zeroes read split partial ...passed 00:08:45.005 Test: blockdev reset ...[2024-07-14 21:07:56.465344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:45.005 [2024-07-14 21:07:56.469067] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:45.005 passed 00:08:45.005 Test: blockdev write read 8 blocks ...passed 00:08:45.005 Test: blockdev write read size > 128k ...passed 00:08:45.005 Test: blockdev write read invalid size ...passed 00:08:45.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:45.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:45.005 Test: blockdev write read max offset ...passed 00:08:45.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:45.005 Test: blockdev writev readv 8 blocks ...passed 00:08:45.005 Test: blockdev writev readv 30 x 1block ...passed 00:08:45.005 Test: blockdev writev readv block ...passed 00:08:45.005 Test: blockdev writev readv size > 128k ...passed 00:08:45.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:45.005 Test: blockdev comparev and writev ...passed 00:08:45.005 Test: blockdev nvme passthru rw ...[2024-07-14 21:07:56.477907] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:45.005 separate metadata which is not supported yet. 00:08:45.005 passed 00:08:45.005 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:07:56.478485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:45.005 passed 00:08:45.005 Test: blockdev nvme admin passthru ...[2024-07-14 21:07:56.478596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:45.005 passed 00:08:45.005 Test: blockdev copy ...passed 00:08:45.005 00:08:45.005 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.005 suites 6 6 n/a 0 0 00:08:45.005 tests 138 138 138 0 0 00:08:45.005 asserts 893 893 893 0 n/a 00:08:45.005 00:08:45.005 Elapsed time = 1.213 seconds 00:08:45.005 0 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66126 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66126 ']' 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66126 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66126 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66126' 00:08:45.005 killing process with pid 66126 00:08:45.005 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66126 00:08:45.006 21:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66126 00:08:45.944 21:07:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:45.944 00:08:45.944 real 0m2.513s 00:08:45.944 user 0m6.307s 00:08:45.944 sys 0m0.301s 00:08:45.944 21:07:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.944 21:07:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:45.944 ************************************ 00:08:45.944 END 
TEST bdev_bounds 00:08:45.944 ************************************ 00:08:45.944 21:07:57 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:45.944 21:07:57 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:45.944 21:07:57 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:45.944 21:07:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.944 21:07:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.944 ************************************ 00:08:45.944 START TEST bdev_nbd 00:08:45.944 ************************************ 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=66180 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 66180 /var/tmp/spdk-nbd.sock 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 66180 ']' 00:08:45.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
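bdev_nbd pushes the same bdevs through the kernel NBD driver. The scaffolding above: a bare bdev_svc app is started on a dedicated RPC socket, and each bdev is then exported as a /dev/nbd* node. The two essential commands, as run in this log:

  # expose the bdevs, then hand one to the kernel NBD driver
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock \
      -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0

In the first pass below nbd_start_disk is called without a device argument, so SPDK picks the first free /dev/nbd* itself. waitfornbd then sanity-checks every export with a one-block O_DIRECT read (dd ... bs=4096 count=1 iflag=direct); the MB/s figure on each dd line is just 4096 bytes over the copy time, e.g. 4096 B / 0.000709 s is roughly 5.8 MB/s, so it varies run to run.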
00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.944 21:07:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:46.203 [2024-07-14 21:07:57.550712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:46.203 [2024-07-14 21:07:57.550885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.203 [2024-07-14 21:07:57.709320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.462 [2024-07-14 21:07:57.887187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:47.030 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 
)) 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.289 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.289 1+0 records in 00:08:47.289 1+0 records out 00:08:47.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709384 s, 5.8 MB/s 00:08:47.290 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.290 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:47.290 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.290 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.290 21:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:47.290 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:47.290 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:47.290 21:07:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:47.548 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:47.548 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:47.548 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:47.548 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:47.548 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.808 1+0 records in 00:08:47.808 1+0 records out 00:08:47.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906636 s, 4.5 MB/s 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 
']' 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:47.808 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.066 1+0 records in 00:08:48.066 1+0 records out 00:08:48.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065633 s, 6.2 MB/s 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:48.066 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.325 1+0 records in 00:08:48.325 1+0 records out 00:08:48.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734236 s, 5.6 MB/s 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:48.325 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.584 1+0 records in 00:08:48.584 1+0 records out 00:08:48.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787295 s, 5.2 MB/s 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 
)) 00:08:48.584 21:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.844 1+0 records in 00:08:48.844 1+0 records out 00:08:48.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000938192 s, 4.4 MB/s 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:48.844 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.103 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:49.103 { 00:08:49.103 "nbd_device": "/dev/nbd0", 00:08:49.103 "bdev_name": "Nvme0n1" 00:08:49.103 }, 00:08:49.103 { 00:08:49.103 "nbd_device": "/dev/nbd1", 00:08:49.103 "bdev_name": "Nvme1n1" 00:08:49.103 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd2", 00:08:49.104 "bdev_name": "Nvme2n1" 00:08:49.104 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd3", 00:08:49.104 "bdev_name": "Nvme2n2" 00:08:49.104 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd4", 00:08:49.104 "bdev_name": "Nvme2n3" 00:08:49.104 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd5", 00:08:49.104 "bdev_name": "Nvme3n1" 00:08:49.104 } 00:08:49.104 ]' 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:49.104 { 00:08:49.104 
"nbd_device": "/dev/nbd0", 00:08:49.104 "bdev_name": "Nvme0n1" 00:08:49.104 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd1", 00:08:49.104 "bdev_name": "Nvme1n1" 00:08:49.104 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd2", 00:08:49.104 "bdev_name": "Nvme2n1" 00:08:49.104 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd3", 00:08:49.104 "bdev_name": "Nvme2n2" 00:08:49.104 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd4", 00:08:49.104 "bdev_name": "Nvme2n3" 00:08:49.104 }, 00:08:49.104 { 00:08:49.104 "nbd_device": "/dev/nbd5", 00:08:49.104 "bdev_name": "Nvme3n1" 00:08:49.104 } 00:08:49.104 ]' 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.104 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.363 21:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:49.622 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:49.622 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:49.622 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:49.623 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.623 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.623 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:49.623 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.623 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.623 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.623 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:49.881 21:08:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:49.881 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:49.881 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:49.881 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.881 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.881 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:49.881 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.881 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.882 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.882 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.139 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.399 21:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 
/proc/partitions 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.658 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:50.916 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 
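nbd_get_count above reduces the nbd_get_disks JSON to a device count so the start/stop pass can assert a clean teardown; with the empty list '[]' the count is 0, as logged. A stand-alone equivalent, assuming jq:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[].nbd_device' | grep -c /dev/nbd

The nbd_rpc_data_verify pass starting here then re-exports all six bdevs at fixed indexes (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd13) before pushing data through them.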
00:08:50.917 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:51.175 /dev/nbd0 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.175 1+0 records in 00:08:51.175 1+0 records out 00:08:51.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627705 s, 6.5 MB/s 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:51.175 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:51.433 /dev/nbd1 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:08:51.433 1+0 records in 00:08:51.433 1+0 records out 00:08:51.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600674 s, 6.8 MB/s 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:51.433 21:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:51.692 /dev/nbd10 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.692 1+0 records in 00:08:51.692 1+0 records out 00:08:51.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733018 s, 5.6 MB/s 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:51.692 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:51.949 /dev/nbd11 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.949 1+0 records in 00:08:51.949 1+0 records out 00:08:51.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000893361 s, 4.6 MB/s 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:51.949 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:52.206 /dev/nbd12 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.206 1+0 records in 00:08:52.206 1+0 records out 00:08:52.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907666 s, 4.5 MB/s 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:52.206 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:52.465 /dev/nbd13 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.465 1+0 records in 00:08:52.465 1+0 records out 00:08:52.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110798 s, 3.7 MB/s 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.465 21:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd0", 00:08:52.724 "bdev_name": "Nvme0n1" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd1", 00:08:52.724 "bdev_name": "Nvme1n1" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd10", 00:08:52.724 "bdev_name": "Nvme2n1" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd11", 00:08:52.724 
"bdev_name": "Nvme2n2" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd12", 00:08:52.724 "bdev_name": "Nvme2n3" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd13", 00:08:52.724 "bdev_name": "Nvme3n1" 00:08:52.724 } 00:08:52.724 ]' 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd0", 00:08:52.724 "bdev_name": "Nvme0n1" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd1", 00:08:52.724 "bdev_name": "Nvme1n1" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd10", 00:08:52.724 "bdev_name": "Nvme2n1" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd11", 00:08:52.724 "bdev_name": "Nvme2n2" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd12", 00:08:52.724 "bdev_name": "Nvme2n3" 00:08:52.724 }, 00:08:52.724 { 00:08:52.724 "nbd_device": "/dev/nbd13", 00:08:52.724 "bdev_name": "Nvme3n1" 00:08:52.724 } 00:08:52.724 ]' 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:52.724 /dev/nbd1 00:08:52.724 /dev/nbd10 00:08:52.724 /dev/nbd11 00:08:52.724 /dev/nbd12 00:08:52.724 /dev/nbd13' 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:52.724 /dev/nbd1 00:08:52.724 /dev/nbd10 00:08:52.724 /dev/nbd11 00:08:52.724 /dev/nbd12 00:08:52.724 /dev/nbd13' 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:52.724 256+0 records in 00:08:52.724 256+0 records out 00:08:52.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00812057 s, 129 MB/s 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.724 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:52.983 256+0 records in 00:08:52.983 256+0 records out 00:08:52.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171688 s, 6.1 MB/s 00:08:52.983 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:08:52.983 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:53.242 256+0 records in 00:08:53.242 256+0 records out 00:08:53.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181218 s, 5.8 MB/s 00:08:53.242 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.242 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:53.242 256+0 records in 00:08:53.242 256+0 records out 00:08:53.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209965 s, 5.0 MB/s 00:08:53.242 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.242 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:53.501 256+0 records in 00:08:53.501 256+0 records out 00:08:53.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15818 s, 6.6 MB/s 00:08:53.501 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.501 21:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:53.760 256+0 records in 00:08:53.760 256+0 records out 00:08:53.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181553 s, 5.8 MB/s 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:53.760 256+0 records in 00:08:53.760 256+0 records out 00:08:53.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176457 s, 5.9 MB/s 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:53.760 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.019 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.278 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.537 21:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.796 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.055 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.312 21:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:55.570 21:08:07 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.570 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.828 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:55.828 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.828 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:55.828 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:55.829 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:56.087 malloc_lvol_verify 00:08:56.087 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:56.346 6a79aa59-ee71-4f45-bc82-bdedea253e87 00:08:56.346 21:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:56.605 ded7bd43-207b-40fa-8870-6d8aee1a3601 00:08:56.605 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:56.862 /dev/nbd0 00:08:56.862 21:08:08 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:56.862 mke2fs 1.46.5 (30-Dec-2021) 00:08:56.863 Discarding device blocks: 0/4096 done 00:08:56.863 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:56.863 00:08:56.863 Allocating group tables: 0/1 done 00:08:56.863 Writing inode tables: 0/1 done 00:08:56.863 Creating journal (1024 blocks): done 00:08:56.863 Writing superblocks and filesystem accounting information: 0/1 done 00:08:56.863 00:08:56.863 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:56.863 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:56.863 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.863 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:56.863 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:56.863 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:56.863 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.863 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 66180 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 66180 ']' 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 66180 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66180 00:08:57.120 killing process with pid 66180 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66180' 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 66180 00:08:57.120 21:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 66180 00:08:58.054 21:08:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:58.054 00:08:58.054 real 0m12.115s 
00:08:58.054 user 0m17.014s 00:08:58.054 sys 0m3.954s 00:08:58.054 21:08:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.054 21:08:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:58.054 ************************************ 00:08:58.054 END TEST bdev_nbd 00:08:58.054 ************************************ 00:08:58.338 21:08:09 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:58.338 21:08:09 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:58.338 21:08:09 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:58.338 skipping fio tests on NVMe due to multi-ns failures. 00:08:58.338 21:08:09 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:58.338 21:08:09 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:58.338 21:08:09 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:58.338 21:08:09 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:58.338 21:08:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.338 21:08:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.338 ************************************ 00:08:58.338 START TEST bdev_verify 00:08:58.338 ************************************ 00:08:58.338 21:08:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:58.338 [2024-07-14 21:08:09.715871] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:58.338 [2024-07-14 21:08:09.716039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66581 ] 00:08:58.338 [2024-07-14 21:08:09.870502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.621 [2024-07-14 21:08:10.039852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.622 [2024-07-14 21:08:10.039892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.189 Running I/O for 5 seconds... 
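The verify pass now running is a stock bdevperf invocation. In the run_test line above, -q 128 keeps 128 I/Os outstanding per job, -o 4096 issues 4 KiB I/Os, -w verify writes a pattern and reads it back for comparison, -t 5 bounds the run to five seconds, and -m 0x3 runs reactors on cores 0 and 1 (-C is reproduced from the harness invocation as-is). Rerunning it by hand would look like:

  # bdev.json is the configuration the test setup generated earlier in this run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3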
00:09:04.490
00:09:04.490 Latency(us)
00:09:04.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:04.490 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x0 length 0xbd0bd
00:09:04.490 Nvme0n1 : 5.08 1538.28 6.01 0.00 0.00 83020.78 14239.19 76260.07
00:09:04.490 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:04.490 Nvme0n1 : 5.08 1574.10 6.15 0.00 0.00 80321.35 4051.32 75306.82
00:09:04.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x0 length 0xa0000
00:09:04.490 Nvme1n1 : 5.08 1537.69 6.01 0.00 0.00 82932.46 14715.81 73400.32
00:09:04.490 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0xa0000 length 0xa0000
00:09:04.490 Nvme1n1 : 5.08 1573.34 6.15 0.00 0.00 80267.08 5064.15 77689.95
00:09:04.490 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x0 length 0x80000
00:09:04.490 Nvme2n1 : 5.08 1537.12 6.00 0.00 0.00 82848.30 14000.87 70540.57
00:09:04.490 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x80000 length 0x80000
00:09:04.490 Nvme2n1 : 5.07 1566.33 6.12 0.00 0.00 81481.56 16086.11 77689.95
00:09:04.490 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x0 length 0x80000
00:09:04.490 Nvme2n2 : 5.08 1536.55 6.00 0.00 0.00 82713.96 14179.61 70063.94
00:09:04.490 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x80000 length 0x80000
00:09:04.490 Nvme2n2 : 5.07 1565.69 6.12 0.00 0.00 81219.51 18826.71 73400.32
00:09:04.490 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x0 length 0x80000
00:09:04.490 Nvme2n3 : 5.08 1535.86 6.00 0.00 0.00 82555.81 14954.12 72923.69
00:09:04.490 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x80000 length 0x80000
00:09:04.490 Nvme2n3 : 5.07 1565.01 6.11 0.00 0.00 81069.46 20256.58 68634.07
00:09:04.490 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x0 length 0x20000
00:09:04.490 Nvme3n1 : 5.09 1535.12 6.00 0.00 0.00 82403.89 15192.44 75783.45
00:09:04.490 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.490 Verification LBA range: start 0x20000 length 0x20000
00:09:04.490 Nvme3n1 : 5.07 1564.38 6.11 0.00 0.00 80911.92 19065.02 72447.07
00:09:04.490 ===================================================================================================================
00:09:04.490 Total : 18629.49 72.77 0.00 0.00 81802.54 4051.32 77689.95
00:09:05.864
00:09:05.864 real 0m7.457s
00:09:05.864 user 0m13.734s
00:09:05.864 sys 0m0.230s
00:09:05.864 21:08:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:05.864 21:08:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:05.864 ************************************
00:09:05.864 END TEST bdev_verify
00:09:05.864 ************************************
00:09:05.864 21:08:17 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:05.864 21:08:17 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:05.864 21:08:17 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:09:05.864 21:08:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:05.864 21:08:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:05.864 ************************************
00:09:05.864 START TEST bdev_verify_big_io
00:09:05.864 ************************************
00:09:05.864 21:08:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:05.864 [2024-07-14 21:08:17.242085] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:09:05.864 [2024-07-14 21:08:17.242268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66674 ]
00:09:05.864 [2024-07-14 21:08:17.396734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:06.123 [2024-07-14 21:08:17.554644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.123 [2024-07-14 21:08:17.554662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:07.058 Running I/O for 5 seconds...
00:09:13.646
00:09:13.646 Latency(us)
00:09:13.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:13.646 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x0 length 0xbd0b
00:09:13.646 Nvme0n1 : 5.67 133.04 8.31 0.00 0.00 922939.60 28955.00 1006632.96
00:09:13.646 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:13.646 Nvme0n1 : 5.57 137.94 8.62 0.00 0.00 898410.43 22163.08 1006632.96
00:09:13.646 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x0 length 0xa000
00:09:13.646 Nvme1n1 : 5.67 135.43 8.46 0.00 0.00 888473.91 113436.86 854112.81
00:09:13.646 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0xa000 length 0xa000
00:09:13.646 Nvme1n1 : 5.57 137.87 8.62 0.00 0.00 872317.98 108193.98 831234.79
00:09:13.646 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x0 length 0x8000
00:09:13.646 Nvme2n1 : 5.67 135.36 8.46 0.00 0.00 861914.45 148707.14 876990.84
00:09:13.646 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x8000 length 0x8000
00:09:13.646 Nvme2n1 : 5.68 139.15 8.70 0.00 0.00 834265.29 102951.10 819795.78
00:09:13.646 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x0 length 0x8000
00:09:13.646 Nvme2n2 : 5.79 143.62 8.98 0.00 0.00 796313.81 22401.40 896055.85
00:09:13.646 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x8000 length 0x8000
00:09:13.646 Nvme2n2 : 5.80 150.58 9.41 0.00 0.00 757033.92 34078.72 838860.80
00:09:13.646 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x0 length 0x8000
00:09:13.646 Nvme2n3 : 5.87 148.91 9.31 0.00 0.00 743365.70 20137.43 918933.88
00:09:13.646 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x8000 length 0x8000
00:09:13.646 Nvme2n3 : 5.81 154.24 9.64 0.00 0.00 718643.47 24069.59 857925.82
00:09:13.646 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x0 length 0x2000
00:09:13.646 Nvme3n1 : 5.88 163.33 10.21 0.00 0.00 662868.08 2129.92 941811.90
00:09:13.646 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:13.646 Verification LBA range: start 0x2000 length 0x2000
00:09:13.646 Nvme3n1 : 5.88 172.88 10.81 0.00 0.00 623576.58 852.71 1044763.00
00:09:13.646 ===================================================================================================================
00:09:13.646 Total : 1752.33 109.52 0.00 0.00 789303.94 852.71 1044763.00
00:09:14.213
00:09:14.213 real 0m8.596s
00:09:14.213 user 0m15.990s
00:09:14.213 sys 0m0.245s
00:09:14.213 21:08:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:14.213 21:08:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:14.213 ************************************
00:09:14.213 END TEST bdev_verify_big_io
00:09:14.213 ************************************
00:09:14.472 21:08:25 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:14.472 21:08:25 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:14.472 21:08:25 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:09:14.472 21:08:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:14.472 21:08:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:14.472 ************************************
00:09:14.472 START TEST bdev_write_zeroes
00:09:14.472 ************************************
00:09:14.472 21:08:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:14.731 [2024-07-14 21:08:25.913426] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:09:14.731 [2024-07-14 21:08:25.913616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66784 ]
00:09:14.731 [2024-07-14 21:08:26.087863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.731 [2024-07-14 21:08:26.247547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:15.298 Running I/O for 1 seconds...
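A quick consistency check on these Latency tables: the MiB/s column is IOPS times the I/O size over 2^20. For the big_io Total above, with 65536-byte I/Os:

  awk 'BEGIN { printf "%.2f MiB/s\n", 1752.33 * 65536 / (1024 * 1024) }'   # prints 109.52, matching the Total row

The 4 KiB write_zeroes table that follows obeys the same relation (50729.10 * 4096 / 2^20 = 198.16).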
00:09:16.673
00:09:16.673 Latency(us)
00:09:16.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:16.673 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:16.673 Nvme0n1 : 1.01 8449.86 33.01 0.00 0.00 15085.06 7328.12 25380.31
00:09:16.673 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:16.673 Nvme1n1 : 1.02 8436.16 32.95 0.00 0.00 15086.66 11379.43 25499.46
00:09:16.673 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:16.673 Nvme2n1 : 1.02 8458.04 33.04 0.00 0.00 15006.79 10843.23 24427.05
00:09:16.673 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:16.673 Nvme2n2 : 1.02 8445.05 32.99 0.00 0.00 14963.46 10307.03 24427.05
00:09:16.673 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:16.673 Nvme2n3 : 1.03 8476.47 33.11 0.00 0.00 14884.47 7149.38 24784.52
00:09:16.673 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:16.673 Nvme3n1 : 1.03 8463.51 33.06 0.00 0.00 14866.51 7328.12 24069.59
00:09:16.673 ===================================================================================================================
00:09:16.673 Total : 50729.10 198.16 0.00 0.00 14981.64 7149.38 25499.46
00:09:17.620
00:09:17.620 real 0m3.317s
00:09:17.620 user 0m2.971s
00:09:17.620 sys 0m0.223s
00:09:17.620 21:08:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:17.620 21:08:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:17.620 ************************************
00:09:17.620 END TEST bdev_write_zeroes
00:09:17.620 ************************************
00:09:17.879 21:08:29 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:17.879 21:08:29 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:17.879 21:08:29 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:09:17.879 21:08:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:17.879 21:08:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:17.879 ************************************
00:09:17.879 START TEST bdev_json_nonenclosed
00:09:17.879 ************************************
00:09:17.879 21:08:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:17.879 [2024-07-14 21:08:29.287112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:09:17.879 [2024-07-14 21:08:29.287312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66847 ] 00:09:18.138 [2024-07-14 21:08:29.461105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.138 [2024-07-14 21:08:29.664198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.138 [2024-07-14 21:08:29.664334] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:18.138 [2024-07-14 21:08:29.664356] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:18.138 [2024-07-14 21:08:29.664372] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.706 00:09:18.706 real 0m0.835s 00:09:18.706 user 0m0.590s 00:09:18.706 sys 0m0.138s 00:09:18.706 21:08:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:09:18.706 21:08:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.706 21:08:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:18.706 ************************************ 00:09:18.706 END TEST bdev_json_nonenclosed 00:09:18.706 ************************************ 00:09:18.706 21:08:30 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:09:18.706 21:08:30 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:09:18.706 21:08:30 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:18.706 21:08:30 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:18.706 21:08:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.706 21:08:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.706 ************************************ 00:09:18.706 START TEST bdev_json_nonarray 00:09:18.707 ************************************ 00:09:18.707 21:08:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:18.707 [2024-07-14 21:08:30.179164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:18.707 [2024-07-14 21:08:30.179369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66873 ] 00:09:18.965 [2024-07-14 21:08:30.346827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.965 [2024-07-14 21:08:30.509112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.965 [2024-07-14 21:08:30.509265] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:18.965 [2024-07-14 21:08:30.509295] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:18.965 [2024-07-14 21:08:30.509312] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:19.533 00:09:19.533 real 0m0.786s 00:09:19.533 user 0m0.560s 00:09:19.533 sys 0m0.120s 00:09:19.533 21:08:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:09:19.533 21:08:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.533 21:08:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:19.533 ************************************ 00:09:19.533 END TEST bdev_json_nonarray 00:09:19.533 ************************************ 00:09:19.533 21:08:30 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:19.533 21:08:30 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:19.533 00:09:19.533 real 0m41.542s 00:09:19.533 user 1m2.679s 00:09:19.533 sys 0m6.239s 00:09:19.533 21:08:30 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.533 21:08:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.533 ************************************ 00:09:19.533 END TEST blockdev_nvme 00:09:19.533 ************************************ 00:09:19.533 21:08:30 -- common/autotest_common.sh@1142 -- # return 0 00:09:19.533 21:08:30 -- spdk/autotest.sh@213 -- # uname -s 00:09:19.533 21:08:30 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:09:19.533 21:08:30 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:19.533 21:08:30 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.533 21:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.533 21:08:30 -- common/autotest_common.sh@10 -- # set +x 00:09:19.533 ************************************ 00:09:19.533 START TEST blockdev_nvme_gpt 00:09:19.533 ************************************ 00:09:19.533 21:08:30 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:19.533 * Looking for test storage... 
00:09:19.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66949 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 66949 00:09:19.533 21:08:31 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:19.533 21:08:31 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 66949 ']' 00:09:19.533 21:08:31 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.533 21:08:31 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.533 21:08:31 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
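The waitforlisten step above blocks until the freshly launched spdk_tgt answers on its UNIX socket. A minimal stand-in for that wait, using rpc_get_methods as the liveness probe (an assumption; the harness may probe the socket differently):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while ! $rpc -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5   # retry until the RPC server accepts connections
  done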
00:09:19.533 21:08:31 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.533 21:08:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:19.792 [2024-07-14 21:08:31.187069] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:19.792 [2024-07-14 21:08:31.187265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66949 ] 00:09:20.051 [2024-07-14 21:08:31.353539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.051 [2024-07-14 21:08:31.506705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.619 21:08:32 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.619 21:08:32 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:09:20.619 21:08:32 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:20.619 21:08:32 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:09:20.619 21:08:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.186 Waiting for block devices as requested 00:09:21.186 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.445 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.445 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.445 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.713 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:09:26.713 BYT; 00:09:26.713 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:26.713 
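parted's machine-readable print above came back with "unrecognised disk label", and the lines that follow match on exactly that string to pick /dev/nvme1n1 as the blank disk, stamp a GPT label with two SPDK test partitions, and parse the partition-type GUID out of module/bdev/gpt/gpt.h. A condensed sketch of that flow, using only the device path, GUIDs, and substitutions visible in this run:

    # Sketch only; device path and GUIDs are the ones from this run.
    dev=/dev/nvme1n1
    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    if [[ "$(parted "$dev" -ms print 2>&1)" == *"unrecognised disk label"* ]]; then
        parted -s "$dev" mklabel gpt \
            mkpart SPDK_TEST_first 0% 50% \
            mkpart SPDK_TEST_second 50% 100%
        # Pull the GUID from between the parens in gpt.h, then normalize it,
        # exactly as the IFS='()' read and the two substitutions trace here.
        IFS='()' read -r _ guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
        guid=${guid//, /-}      # 0x6527994e, 0x2c5a, ... -> 0x6527994e-0x2c5a-...
        guid=${guid//0x/}       # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
        sgdisk -t "1:$guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    fi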
21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:09:26.713 BYT; 00:09:26.713 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:26.713 21:08:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:26.713 21:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:09:27.648 The operation has completed successfully. 00:09:27.648 21:08:39 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:09:29.024 The operation has completed successfully. 00:09:29.024 21:08:40 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.850 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.850 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.850 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.850 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.158 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:09:30.158 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.158 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.158 [] 00:09:30.158 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.158 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:09:30.158 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:30.158 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:30.158 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:30.158 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:30.158 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.158 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.416 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.416 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:09:30.416 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.416 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.416 21:08:41 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.416 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.416 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:30.416 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:30.416 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.416 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.674 21:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.674 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:30.674 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:30.675 21:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' 
"partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8dc0a9c0-5377-4c9c-9c33-e164f898dc74"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8dc0a9c0-5377-4c9c-9c33-e164f898dc74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "eafae504-8867-42cf-84f3-e6619a12303d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eafae504-8867-42cf-84f3-e6619a12303d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cc9b9349-4cd1-49a5-b621-8bb706890be5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc9b9349-4cd1-49a5-b621-8bb706890be5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f088c1bb-900f-4fa3-8c7c-da616dafbeb2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f088c1bb-900f-4fa3-8c7c-da616dafbeb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2bed0b5a-edb6-42c1-94e1-fc1a7687f4b1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2bed0b5a-edb6-42c1-94e1-fc1a7687f4b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:30.675 21:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:30.675 21:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:09:30.675 21:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:30.675 21:08:42 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 66949 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 66949 ']' 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 66949 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66949 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.675 killing process with pid 66949 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66949' 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 66949 00:09:30.675 21:08:42 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 66949 00:09:32.575 21:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:32.575 21:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:32.575 21:08:43 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:32.575 21:08:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.575 21:08:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:32.575 ************************************ 00:09:32.575 START TEST bdev_hello_world 00:09:32.575 ************************************ 00:09:32.575 21:08:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:32.575 [2024-07-14 21:08:43.904414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
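The wall of JSON a few lines up is the raw bdev_get_bdevs dump that the surrounding jq filters reduce first to unclaimed bdevs and then to bare names. Reproduced as a standalone query against a running target (socket and paths as in this run):

    # Assumes a target already listening on the default RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
    # -> Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1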
00:09:32.575 [2024-07-14 21:08:43.904620] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67577 ] 00:09:32.575 [2024-07-14 21:08:44.075872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.833 [2024-07-14 21:08:44.230984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.400 [2024-07-14 21:08:44.787391] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:33.400 [2024-07-14 21:08:44.787452] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:33.400 [2024-07-14 21:08:44.787494] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:33.400 [2024-07-14 21:08:44.790384] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:33.400 [2024-07-14 21:08:44.791121] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:33.400 [2024-07-14 21:08:44.791159] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:33.400 [2024-07-14 21:08:44.791538] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:33.400 00:09:33.400 [2024-07-14 21:08:44.791589] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:34.359 00:09:34.359 real 0m1.941s 00:09:34.359 user 0m1.621s 00:09:34.359 sys 0m0.211s 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:34.359 ************************************ 00:09:34.359 END TEST bdev_hello_world 00:09:34.359 ************************************ 00:09:34.359 21:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:34.359 21:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:34.359 21:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:34.359 21:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.359 21:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.359 ************************************ 00:09:34.359 START TEST bdev_bounds 00:09:34.359 ************************************ 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=67619 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.359 Process bdevio pid: 67619 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 67619' 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 67619 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 67619 ']' 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.359 21:08:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:34.359 [2024-07-14 21:08:45.894226] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:34.359 [2024-07-14 21:08:45.894394] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67619 ] 00:09:34.618 [2024-07-14 21:08:46.050667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.877 [2024-07-14 21:08:46.205404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.877 [2024-07-14 21:08:46.205544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.877 [2024-07-14 21:08:46.205565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.445 21:08:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.445 21:08:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:09:35.445 21:08:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:35.445 I/O targets: 00:09:35.445 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:35.445 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:35.445 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:35.445 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:35.445 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:35.445 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:35.445 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:35.445 00:09:35.445 00:09:35.445 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.445 http://cunit.sourceforge.net/ 00:09:35.445 00:09:35.445 00:09:35.445 Suite: bdevio tests on: Nvme3n1 00:09:35.445 Test: blockdev write read block ...passed 00:09:35.445 Test: blockdev write zeroes read block ...passed 00:09:35.445 Test: blockdev write zeroes read no split ...passed 00:09:35.445 Test: blockdev write zeroes read split ...passed 00:09:35.705 Test: blockdev write zeroes read split partial ...passed 00:09:35.705 Test: blockdev reset ...[2024-07-14 21:08:47.007477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:35.705 [2024-07-14 21:08:47.011107] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:35.705 passed 00:09:35.705 Test: blockdev write read 8 blocks ...passed 00:09:35.705 Test: blockdev write read size > 128k ...passed 00:09:35.705 Test: blockdev write read invalid size ...passed 00:09:35.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.705 Test: blockdev write read max offset ...passed 00:09:35.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.705 Test: blockdev writev readv 8 blocks ...passed 00:09:35.705 Test: blockdev writev readv 30 x 1block ...passed 00:09:35.705 Test: blockdev writev readv block ...passed 00:09:35.705 Test: blockdev writev readv size > 128k ...passed 00:09:35.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:35.705 Test: blockdev comparev and writev ...[2024-07-14 21:08:47.019550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29c204000 len:0x1000 00:09:35.705 [2024-07-14 21:08:47.019640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:35.705 passed 00:09:35.705 Test: blockdev nvme passthru rw ...passed 00:09:35.705 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:08:47.020475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:35.705 [2024-07-14 21:08:47.020531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:35.705 passed 00:09:35.705 Test: blockdev nvme admin passthru ...passed 00:09:35.705 Test: blockdev copy ...passed 00:09:35.705 Suite: bdevio tests on: Nvme2n3 00:09:35.705 Test: blockdev write read block ...passed 00:09:35.705 Test: blockdev write zeroes read block ...passed 00:09:35.705 Test: blockdev write zeroes read no split ...passed 00:09:35.705 Test: blockdev write zeroes read split ...passed 00:09:35.705 Test: blockdev write zeroes read split partial ...passed 00:09:35.705 Test: blockdev reset ...[2024-07-14 21:08:47.081813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:35.705 [2024-07-14 21:08:47.085965] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:35.705 passed 00:09:35.705 Test: blockdev write read 8 blocks ...passed 00:09:35.705 Test: blockdev write read size > 128k ...passed 00:09:35.705 Test: blockdev write read invalid size ...passed 00:09:35.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.705 Test: blockdev write read max offset ...passed 00:09:35.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.705 Test: blockdev writev readv 8 blocks ...passed 00:09:35.705 Test: blockdev writev readv 30 x 1block ...passed 00:09:35.705 Test: blockdev writev readv block ...passed 00:09:35.705 Test: blockdev writev readv size > 128k ...passed 00:09:35.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:35.705 Test: blockdev comparev and writev ...[2024-07-14 21:08:47.094328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29263a000 len:0x1000 00:09:35.705 [2024-07-14 21:08:47.094416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:35.705 passed 00:09:35.705 Test: blockdev nvme passthru rw ...passed 00:09:35.705 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:08:47.095339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:35.705 [2024-07-14 21:08:47.095395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:35.705 passed 00:09:35.705 Test: blockdev nvme admin passthru ...passed 00:09:35.705 Test: blockdev copy ...passed 00:09:35.705 Suite: bdevio tests on: Nvme2n2 00:09:35.705 Test: blockdev write read block ...passed 00:09:35.705 Test: blockdev write zeroes read block ...passed 00:09:35.705 Test: blockdev write zeroes read no split ...passed 00:09:35.705 Test: blockdev write zeroes read split ...passed 00:09:35.705 Test: blockdev write zeroes read split partial ...passed 00:09:35.705 Test: blockdev reset ...[2024-07-14 21:08:47.156082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:35.705 [2024-07-14 21:08:47.160272] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:35.705 passed 00:09:35.705 Test: blockdev write read 8 blocks ...passed 00:09:35.705 Test: blockdev write read size > 128k ...passed 00:09:35.705 Test: blockdev write read invalid size ...passed 00:09:35.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.705 Test: blockdev write read max offset ...passed 00:09:35.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.705 Test: blockdev writev readv 8 blocks ...passed 00:09:35.705 Test: blockdev writev readv 30 x 1block ...passed 00:09:35.705 Test: blockdev writev readv block ...passed 00:09:35.705 Test: blockdev writev readv size > 128k ...passed 00:09:35.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:35.705 Test: blockdev comparev and writev ...[2024-07-14 21:08:47.168322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292636000 len:0x1000 00:09:35.705 [2024-07-14 21:08:47.168411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:35.705 passed 00:09:35.705 Test: blockdev nvme passthru rw ...passed 00:09:35.705 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:08:47.169285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:35.705 [2024-07-14 21:08:47.169340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:35.705 passed 00:09:35.705 Test: blockdev nvme admin passthru ...passed 00:09:35.705 Test: blockdev copy ...passed 00:09:35.705 Suite: bdevio tests on: Nvme2n1 00:09:35.705 Test: blockdev write read block ...passed 00:09:35.705 Test: blockdev write zeroes read block ...passed 00:09:35.705 Test: blockdev write zeroes read no split ...passed 00:09:35.705 Test: blockdev write zeroes read split ...passed 00:09:35.705 Test: blockdev write zeroes read split partial ...passed 00:09:35.705 Test: blockdev reset ...[2024-07-14 21:08:47.239854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:35.705 [2024-07-14 21:08:47.243928] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:35.705 passed 00:09:35.705 Test: blockdev write read 8 blocks ...passed 00:09:35.705 Test: blockdev write read size > 128k ...passed 00:09:35.705 Test: blockdev write read invalid size ...passed 00:09:35.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.705 Test: blockdev write read max offset ...passed 00:09:35.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.705 Test: blockdev writev readv 8 blocks ...passed 00:09:35.705 Test: blockdev writev readv 30 x 1block ...passed 00:09:35.705 Test: blockdev writev readv block ...passed 00:09:35.964 Test: blockdev writev readv size > 128k ...passed 00:09:35.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:35.964 Test: blockdev comparev and writev ...[2024-07-14 21:08:47.252474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292630000 len:0x1000 00:09:35.964 [2024-07-14 21:08:47.252564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:35.964 passed 00:09:35.964 Test: blockdev nvme passthru rw ...passed 00:09:35.964 Test: blockdev nvme passthru vendor specific ...passed 00:09:35.964 Test: blockdev nvme admin passthru ...[2024-07-14 21:08:47.253343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:35.964 [2024-07-14 21:08:47.253386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:35.964 passed 00:09:35.964 Test: blockdev copy ...passed 00:09:35.964 Suite: bdevio tests on: Nvme1n1 00:09:35.964 Test: blockdev write read block ...passed 00:09:35.964 Test: blockdev write zeroes read block ...passed 00:09:35.964 Test: blockdev write zeroes read no split ...passed 00:09:35.964 Test: blockdev write zeroes read split ...passed 00:09:35.964 Test: blockdev write zeroes read split partial ...passed 00:09:35.964 Test: blockdev reset ...[2024-07-14 21:08:47.328984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:35.964 [2024-07-14 21:08:47.332636] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:35.964 passed 00:09:35.964 Test: blockdev write read 8 blocks ...passed 00:09:35.964 Test: blockdev write read size > 128k ...passed 00:09:35.964 Test: blockdev write read invalid size ...passed 00:09:35.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.964 Test: blockdev write read max offset ...passed 00:09:35.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.964 Test: blockdev writev readv 8 blocks ...passed 00:09:35.964 Test: blockdev writev readv 30 x 1block ...passed 00:09:35.964 Test: blockdev writev readv block ...passed 00:09:35.964 Test: blockdev writev readv size > 128k ...passed 00:09:35.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:35.964 Test: blockdev comparev and writev ...[2024-07-14 21:08:47.341814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27120e000 len:0x1000 00:09:35.964 [2024-07-14 21:08:47.341911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:35.964 passed 00:09:35.964 Test: blockdev nvme passthru rw ...passed 00:09:35.964 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:08:47.342666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:35.964 [2024-07-14 21:08:47.342752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:35.964 passed 00:09:35.964 Test: blockdev nvme admin passthru ...passed 00:09:35.964 Test: blockdev copy ...passed 00:09:35.964 Suite: bdevio tests on: Nvme0n1p2 00:09:35.964 Test: blockdev write read block ...passed 00:09:35.964 Test: blockdev write zeroes read block ...passed 00:09:35.964 Test: blockdev write zeroes read no split ...passed 00:09:35.964 Test: blockdev write zeroes read split ...passed 00:09:35.964 Test: blockdev write zeroes read split partial ...passed 00:09:35.964 Test: blockdev reset ...[2024-07-14 21:08:47.421167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:35.964 [2024-07-14 21:08:47.424960] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:35.964 passed 00:09:35.964 Test: blockdev write read 8 blocks ...passed 00:09:35.964 Test: blockdev write read size > 128k ...passed 00:09:35.964 Test: blockdev write read invalid size ...passed 00:09:35.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.964 Test: blockdev write read max offset ...passed 00:09:35.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.964 Test: blockdev writev readv 8 blocks ...passed 00:09:35.964 Test: blockdev writev readv 30 x 1block ...passed 00:09:35.964 Test: blockdev writev readv block ...passed 00:09:35.964 Test: blockdev writev readv size > 128k ...passed 00:09:35.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:35.964 Test: blockdev comparev and writev ...passed 00:09:35.964 Test: blockdev nvme passthru rw ...passed 00:09:35.964 Test: blockdev nvme passthru vendor specific ...passed 00:09:35.964 Test: blockdev nvme admin passthru ...passed 00:09:35.964 Test: blockdev copy ...[2024-07-14 21:08:47.433497] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:35.964 separate metadata which is not supported yet. 00:09:35.964 passed 00:09:35.964 Suite: bdevio tests on: Nvme0n1p1 00:09:35.964 Test: blockdev write read block ...passed 00:09:35.964 Test: blockdev write zeroes read block ...passed 00:09:35.964 Test: blockdev write zeroes read no split ...passed 00:09:35.964 Test: blockdev write zeroes read split ...passed 00:09:35.964 Test: blockdev write zeroes read split partial ...passed 00:09:35.964 Test: blockdev reset ...[2024-07-14 21:08:47.500580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:35.964 [2024-07-14 21:08:47.504231] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:35.964 passed 00:09:35.964 Test: blockdev write read 8 blocks ...passed 00:09:35.964 Test: blockdev write read size > 128k ...passed 00:09:35.964 Test: blockdev write read invalid size ...passed 00:09:35.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.964 Test: blockdev write read max offset ...passed 00:09:35.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.964 Test: blockdev writev readv 8 blocks ...passed 00:09:36.223 Test: blockdev writev readv 30 x 1block ...passed 00:09:36.223 Test: blockdev writev readv block ...passed 00:09:36.223 Test: blockdev writev readv size > 128k ...passed 00:09:36.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:36.223 Test: blockdev comparev and writev ...passed 00:09:36.223 Test: blockdev nvme passthru rw ...passed 00:09:36.223 Test: blockdev nvme passthru vendor specific ...passed 00:09:36.223 Test: blockdev nvme admin passthru ...passed 00:09:36.223 Test: blockdev copy ...[2024-07-14 21:08:47.511829] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:36.223 separate metadata which is not supported yet. 
00:09:36.223 passed 00:09:36.223 00:09:36.223 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.223 suites 7 7 n/a 0 0 00:09:36.223 tests 161 161 161 0 0 00:09:36.223 asserts 1006 1006 1006 0 n/a 00:09:36.223 00:09:36.223 Elapsed time = 1.529 seconds 00:09:36.223 0 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 67619 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 67619 ']' 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 67619 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67619 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.223 killing process with pid 67619 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67619' 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 67619 00:09:36.223 21:08:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 67619 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:37.161 00:09:37.161 real 0m2.601s 00:09:37.161 user 0m6.478s 00:09:37.161 sys 0m0.350s 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.161 ************************************ 00:09:37.161 END TEST bdev_bounds 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:37.161 ************************************ 00:09:37.161 21:08:48 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:37.161 21:08:48 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:37.161 21:08:48 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:37.161 21:08:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.161 21:08:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:37.161 ************************************ 00:09:37.161 START TEST bdev_nbd 00:09:37.161 ************************************ 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 
'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=67679 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 67679 /var/tmp/spdk-nbd.sock 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 67679 ']' 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.161 21:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:37.161 [2024-07-14 21:08:48.534638] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
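The NBD trace below exports each bdev through nbd_start_disk on /var/tmp/spdk-nbd.sock, waits for the kernel device to surface in /proc/partitions, and proves it serves I/O with a single direct 4 KiB read. A minimal version of that export-and-probe idiom (the helper name is illustrative; the loop bound, grep, and dd flags are the ones traced):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    probe_nbd_export() {
        local bdev=$1 nbd name i
        # nbd_start_disk prints the kernel device it bound, e.g. /dev/nbd0.
        nbd=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
        name=${nbd#/dev/}
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read through the export; fails if the device is dead.
        dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct
    }
    probe_nbd_export Nvme0n1p1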
00:09:37.161 [2024-07-14 21:08:48.534795] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.161 [2024-07-14 21:08:48.696231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.419 [2024-07-14 21:08:48.848152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:37.988 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.247 1+0 records in 00:09:38.247 1+0 records out 00:09:38.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652999 s, 6.3 MB/s 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:38.247 21:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.815 1+0 records in 00:09:38.815 1+0 records out 00:09:38.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050838 s, 8.1 MB/s 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
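
The common/autotest_common.sh@866-887 blocks repeating above are the waitfornbd helper: poll /proc/partitions until the nbd name appears, then prove the device actually serves I/O with a single 4 KiB O_DIRECT read whose output size is checked. A condensed sketch of that check (retry bound and dd flags taken from the trace; the sleep interval and temp-file path are assumptions):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read; fails if the kernel device is not wired to the bdev.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
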
Nvme1n1 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.815 1+0 records in 00:09:38.815 1+0 records out 00:09:38.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667307 s, 6.1 MB/s 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:38.815 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:39.074 1+0 records in 00:09:39.074 1+0 records out 00:09:39.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644172 s, 6.4 MB/s 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:39.074 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.334 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:39.334 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:39.334 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:39.334 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:39.334 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:39.592 1+0 records in 00:09:39.592 1+0 records out 00:09:39.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628606 s, 6.5 MB/s 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.592 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:39.593 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.593 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:39.593 21:08:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:39.593 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:39.593 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:39.593 21:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:39.851 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:39.851 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:39.852 1+0 records in 00:09:39.852 1+0 records out 00:09:39.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718587 s, 5.7 MB/s 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:39.852 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:40.111 1+0 records in 00:09:40.111 1+0 records out 00:09:40.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010262 s, 4.0 MB/s 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:40.111 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd0", 00:09:40.370 "bdev_name": "Nvme0n1p1" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd1", 00:09:40.370 "bdev_name": "Nvme0n1p2" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd2", 00:09:40.370 "bdev_name": "Nvme1n1" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd3", 00:09:40.370 "bdev_name": "Nvme2n1" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd4", 00:09:40.370 "bdev_name": "Nvme2n2" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd5", 00:09:40.370 "bdev_name": "Nvme2n3" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd6", 00:09:40.370 "bdev_name": "Nvme3n1" 00:09:40.370 } 00:09:40.370 ]' 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd0", 00:09:40.370 "bdev_name": "Nvme0n1p1" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd1", 00:09:40.370 "bdev_name": "Nvme0n1p2" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd2", 00:09:40.370 "bdev_name": "Nvme1n1" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd3", 00:09:40.370 "bdev_name": "Nvme2n1" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd4", 00:09:40.370 "bdev_name": "Nvme2n2" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd5", 00:09:40.370 "bdev_name": "Nvme2n3" 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "nbd_device": "/dev/nbd6", 00:09:40.370 "bdev_name": "Nvme3n1" 00:09:40.370 } 00:09:40.370 ]' 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- 
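
With all seven bdevs exported, nbd_get_disks returns the JSON array above pairing each /dev/nbd node with its backing bdev, and the harness reduces it to a plain device list with jq before tearing everything down. The equivalent one-liner, with the socket from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device'
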
# nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.370 21:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.630 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.889 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.149 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.149 21:08:52 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.408 21:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.668 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:41.927 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
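
The teardown loop running here mirrors the start path: each device gets an nbd_stop_disk RPC, then waitfornbd_exit polls /proc/partitions until the entry disappears. One iteration, sketched from the trace (retry bound from the log; sleep interval assumed):

    nbd=/dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
    for ((i = 1; i <= 20; i++)); do
        # Done once the kernel has dropped the partition entry.
        grep -q -w "$(basename "$nbd")" /proc/partitions || break
        sleep 0.1
    done
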
-- # local nbd_name=nbd6 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.185 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.444 
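
Before the data-verification pass starts, nbd_get_count confirms nothing stayed attached: the disk list is fetched again and grepped for /dev/nbd, and the run only continues if the count is 0. A sketch of that guard (the `|| true` mirrors the trace above, where grep -c exits nonzero on an empty list but still prints 0):

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || exit 1
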
21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:42.444 21:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:42.702 /dev/nbd0 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.702 1+0 records in 00:09:42.702 1+0 records out 00:09:42.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492434 s, 8.3 MB/s 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:42.702 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:42.960 /dev/nbd1 00:09:42.960 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:42.960 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:42.960 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:42.961 21:08:54 
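
Unlike the first pass, this phase names the target node explicitly, so every bdev lands on a deterministic device: Nvme0n1p1 on /dev/nbd0, Nvme0n1p2 on /dev/nbd1, Nvme1n1 on /dev/nbd10, and so on through /dev/nbd14 per the two lists above. The start call as issued in the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1p1 /dev/nbd0
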
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.961 1+0 records in 00:09:42.961 1+0 records out 00:09:42.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823967 s, 5.0 MB/s 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:42.961 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:43.219 /dev/nbd10 00:09:43.219 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:43.219 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.220 1+0 records in 00:09:43.220 1+0 records out 00:09:43.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714395 s, 5.7 MB/s 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:43.220 21:08:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:43.478 /dev/nbd11 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.737 1+0 records in 00:09:43.737 1+0 records out 00:09:43.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00256768 s, 1.6 MB/s 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:43.737 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:43.737 /dev/nbd12 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.996 1+0 records in 00:09:43.996 1+0 records out 00:09:43.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000960627 s, 4.3 MB/s 00:09:43.996 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:43.997 /dev/nbd13 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:43.997 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.256 1+0 records in 00:09:44.256 1+0 records out 00:09:44.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0008912 s, 4.6 MB/s 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:44.256 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:44.515 /dev/nbd14 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.515 1+0 records in 00:09:44.515 1+0 records out 00:09:44.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105366 s, 3.9 MB/s 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.515 21:08:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd0", 00:09:44.774 "bdev_name": "Nvme0n1p1" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd1", 00:09:44.774 "bdev_name": "Nvme0n1p2" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd10", 00:09:44.774 "bdev_name": "Nvme1n1" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd11", 00:09:44.774 "bdev_name": "Nvme2n1" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd12", 00:09:44.774 "bdev_name": "Nvme2n2" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd13", 00:09:44.774 "bdev_name": "Nvme2n3" 
00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd14", 00:09:44.774 "bdev_name": "Nvme3n1" 00:09:44.774 } 00:09:44.774 ]' 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd0", 00:09:44.774 "bdev_name": "Nvme0n1p1" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd1", 00:09:44.774 "bdev_name": "Nvme0n1p2" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd10", 00:09:44.774 "bdev_name": "Nvme1n1" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd11", 00:09:44.774 "bdev_name": "Nvme2n1" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd12", 00:09:44.774 "bdev_name": "Nvme2n2" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd13", 00:09:44.774 "bdev_name": "Nvme2n3" 00:09:44.774 }, 00:09:44.774 { 00:09:44.774 "nbd_device": "/dev/nbd14", 00:09:44.774 "bdev_name": "Nvme3n1" 00:09:44.774 } 00:09:44.774 ]' 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:44.774 /dev/nbd1 00:09:44.774 /dev/nbd10 00:09:44.774 /dev/nbd11 00:09:44.774 /dev/nbd12 00:09:44.774 /dev/nbd13 00:09:44.774 /dev/nbd14' 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:44.774 /dev/nbd1 00:09:44.774 /dev/nbd10 00:09:44.774 /dev/nbd11 00:09:44.774 /dev/nbd12 00:09:44.774 /dev/nbd13 00:09:44.774 /dev/nbd14' 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:44.774 256+0 records in 00:09:44.774 256+0 records out 00:09:44.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00805538 s, 130 MB/s 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.774 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:45.033 256+0 records in 00:09:45.033 256+0 records out 00:09:45.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.157067 s, 6.7 MB/s 00:09:45.033 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.033 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:45.033 256+0 records in 00:09:45.033 256+0 records out 00:09:45.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166947 s, 6.3 MB/s 00:09:45.033 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.033 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:45.291 256+0 records in 00:09:45.292 256+0 records out 00:09:45.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175824 s, 6.0 MB/s 00:09:45.292 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.292 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:45.551 256+0 records in 00:09:45.551 256+0 records out 00:09:45.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182022 s, 5.8 MB/s 00:09:45.551 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.551 21:08:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:45.551 256+0 records in 00:09:45.551 256+0 records out 00:09:45.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186133 s, 5.6 MB/s 00:09:45.551 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.551 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:45.811 256+0 records in 00:09:45.811 256+0 records out 00:09:45.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164661 s, 6.4 MB/s 00:09:45.811 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:45.811 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:46.069 256+0 records in 00:09:46.069 256+0 records out 00:09:46.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163496 s, 6.4 MB/s 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
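
The write pass above pushes one shared 1 MiB random pattern through every nbd device with O_DIRECT; the loop resuming below then cmp-compares each device against the pattern file, so a mismatch isolates the broken bdev. The whole pass, condensed (pattern path and dd/cmp flags from the trace):

    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    dd if=/dev/urandom of="$pattern" bs=4096 count=256      # 256 x 4 KiB = 1 MiB
    for nbd in "${nbds[@]}"; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in "${nbds[@]}"; do
        cmp -b -n 1M "$pattern" "$nbd"                      # byte-compare the first 1 MiB
    done
    rm "$pattern"
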
"${nbd_list[@]}" 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.069 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.328 21:08:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.587 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.845 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.103 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:47.361 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.362 21:08:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.620 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.878 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:48.136 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:48.394 malloc_lvol_verify 00:09:48.395 21:08:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:48.653 60065086-d55d-4234-9bca-2223dc0f76fb 00:09:48.653 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:48.911 e8c28a4d-5b91-4ca4-a0de-dd1cbf7c610f 00:09:48.911 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:49.170 /dev/nbd0 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:49.170 mke2fs 1.46.5 (30-Dec-2021) 00:09:49.170 Discarding device blocks: 0/4096 done 00:09:49.170 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:49.170 00:09:49.170 Allocating group tables: 0/1 done 00:09:49.170 Writing inode tables: 0/1 done 00:09:49.170 Creating journal (1024 blocks): done 00:09:49.170 Writing superblocks and filesystem accounting information: 0/1 done 00:09:49.170 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:49.170 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:49.431 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 67679 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 67679 ']' 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 67679 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67679 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67679' 00:09:49.432 killing process with pid 67679 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 67679 00:09:49.432 21:09:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 67679 00:09:50.403 21:09:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:50.403 00:09:50.403 real 0m13.469s 00:09:50.403 user 0m19.097s 00:09:50.403 sys 0m4.288s 00:09:50.403 21:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.403 21:09:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:50.403 ************************************ 00:09:50.403 END TEST bdev_nbd 00:09:50.403 ************************************ 00:09:50.662 21:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:50.662 21:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:50.662 21:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:09:50.662 21:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:09:50.662 skipping fio tests on NVMe due to multi-ns failures. 00:09:50.662 21:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:50.662 21:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:50.662 21:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:50.662 21:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:50.662 21:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.662 21:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:50.662 ************************************ 00:09:50.662 START TEST bdev_verify 00:09:50.662 ************************************ 00:09:50.662 21:09:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:50.662 [2024-07-14 21:09:02.082991] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:50.662 [2024-07-14 21:09:02.083208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68118 ] 00:09:50.921 [2024-07-14 21:09:02.256012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:51.181 [2024-07-14 21:09:02.472077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.181 [2024-07-14 21:09:02.472094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.751 Running I/O for 5 seconds... 
00:09:57.019 00:09:57.019 Latency(us) 00:09:57.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.019 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x0 length 0x5e800 00:09:57.019 Nvme0n1p1 : 5.06 1365.98 5.34 0.00 0.00 93508.47 20375.74 81979.58 00:09:57.019 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x5e800 length 0x5e800 00:09:57.019 Nvme0n1p1 : 5.07 1288.58 5.03 0.00 0.00 99051.23 24188.74 96754.97 00:09:57.019 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x0 length 0x5e7ff 00:09:57.019 Nvme0n1p2 : 5.06 1365.43 5.33 0.00 0.00 93413.41 23235.49 77689.95 00:09:57.019 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:57.019 Nvme0n1p2 : 5.07 1287.57 5.03 0.00 0.00 98834.28 26929.34 95325.09 00:09:57.019 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x0 length 0xa0000 00:09:57.019 Nvme1n1 : 5.06 1364.91 5.33 0.00 0.00 93314.59 22520.55 73400.32 00:09:57.019 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0xa0000 length 0xa0000 00:09:57.019 Nvme1n1 : 5.07 1286.69 5.03 0.00 0.00 98687.17 29550.78 91988.71 00:09:57.019 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x0 length 0x80000 00:09:57.019 Nvme2n1 : 5.07 1364.37 5.33 0.00 0.00 93192.51 21328.99 74830.20 00:09:57.019 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x80000 length 0x80000 00:09:57.019 Nvme2n1 : 5.08 1285.95 5.02 0.00 0.00 98555.66 30146.56 90082.21 00:09:57.019 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x0 length 0x80000 00:09:57.019 Nvme2n2 : 5.07 1363.36 5.33 0.00 0.00 93087.85 22639.71 75783.45 00:09:57.019 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:57.019 Verification LBA range: start 0x80000 length 0x80000 00:09:57.020 Nvme2n2 : 5.08 1285.39 5.02 0.00 0.00 98411.01 23831.27 87222.46 00:09:57.020 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:57.020 Verification LBA range: start 0x0 length 0x80000 00:09:57.020 Nvme2n3 : 5.07 1362.48 5.32 0.00 0.00 92971.68 19779.96 77689.95 00:09:57.020 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:57.020 Verification LBA range: start 0x80000 length 0x80000 00:09:57.020 Nvme2n3 : 5.09 1295.66 5.06 0.00 0.00 97605.88 4259.84 91988.71 00:09:57.020 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:57.020 Verification LBA range: start 0x0 length 0x20000 00:09:57.020 Nvme3n1 : 5.08 1361.77 5.32 0.00 0.00 92841.75 12690.15 81026.33 00:09:57.020 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:57.020 Verification LBA range: start 0x20000 length 0x20000 00:09:57.020 Nvme3n1 : 5.09 1295.22 5.06 0.00 0.00 97506.69 4170.47 95801.72 00:09:57.020 =================================================================================================================== 00:09:57.020 Total : 18573.35 72.55 0.00 0.00 95712.82 4170.47 96754.97 
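The MiB/s column in the table above is the IOPS column scaled by the 4 KiB I/O size this run used (-o 4096): MiB/s = IOPS * 4096 / 2^20. A quick sanity check of the first row, using only the row's own numbers:

    awk 'BEGIN { printf "%.2f MiB/s\n", 1365.98 * 4096 / 1048576 }'
    # prints 5.34 MiB/s, matching the Nvme0n1p1 (core mask 0x1) row;
    # the Total row checks out the same way: 18573.35 * 4096 / 2^20 = 72.55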
00:09:57.955 00:09:57.955 real 0m7.491s 00:09:57.955 user 0m13.651s 00:09:57.955 sys 0m0.273s 00:09:57.955 21:09:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.955 21:09:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 ************************************ 00:09:57.955 END TEST bdev_verify 00:09:57.955 ************************************ 00:09:58.213 21:09:09 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:58.213 21:09:09 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:58.213 21:09:09 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:58.213 21:09:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.213 21:09:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:58.213 ************************************ 00:09:58.213 START TEST bdev_verify_big_io 00:09:58.213 ************************************ 00:09:58.213 21:09:09 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:58.213 [2024-07-14 21:09:09.597429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:58.213 [2024-07-14 21:09:09.597564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68216 ] 00:09:58.213 [2024-07-14 21:09:09.756632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:58.472 [2024-07-14 21:09:09.912893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.472 [2024-07-14 21:09:09.912909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.417 Running I/O for 5 seconds... 
00:10:05.976 00:10:05.977 Latency(us) 00:10:05.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.977 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x0 length 0x5e80 00:10:05.977 Nvme0n1p1 : 5.77 110.15 6.88 0.00 0.00 1121930.98 20137.43 1159153.11 00:10:05.977 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x5e80 length 0x5e80 00:10:05.977 Nvme0n1p1 : 5.90 100.66 6.29 0.00 0.00 1216726.44 14537.08 1685347.61 00:10:05.977 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x0 length 0x5e7f 00:10:05.977 Nvme0n1p2 : 5.78 108.01 6.75 0.00 0.00 1111498.58 67204.19 1273543.21 00:10:05.977 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:05.977 Nvme0n1p2 : 5.90 100.46 6.28 0.00 0.00 1180985.04 33363.78 1715851.64 00:10:05.977 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x0 length 0xa000 00:10:05.977 Nvme1n1 : 5.90 86.82 5.43 0.00 0.00 1326370.44 142034.39 2303054.20 00:10:05.977 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0xa000 length 0xa000 00:10:05.977 Nvme1n1 : 5.90 104.32 6.52 0.00 0.00 1116319.49 55288.55 1746355.67 00:10:05.977 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x0 length 0x8000 00:10:05.977 Nvme2n1 : 5.95 109.20 6.83 0.00 0.00 1031877.98 112960.23 1060015.01 00:10:05.977 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x8000 length 0x8000 00:10:05.977 Nvme2n1 : 5.98 114.28 7.14 0.00 0.00 991657.29 69110.69 1166779.11 00:10:05.977 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x0 length 0x8000 00:10:05.977 Nvme2n2 : 5.98 122.87 7.68 0.00 0.00 906222.73 26810.18 1060015.01 00:10:05.977 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x8000 length 0x8000 00:10:05.977 Nvme2n2 : 5.98 109.55 6.85 0.00 0.00 1000480.85 69587.32 1814989.73 00:10:05.977 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x0 length 0x8000 00:10:05.977 Nvme2n3 : 6.03 127.42 7.96 0.00 0.00 847108.34 41704.73 999006.95 00:10:05.977 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x8000 length 0x8000 00:10:05.977 Nvme2n3 : 6.05 118.03 7.38 0.00 0.00 904208.39 25856.93 1845493.76 00:10:05.977 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x0 length 0x2000 00:10:05.977 Nvme3n1 : 6.07 142.94 8.93 0.00 0.00 736401.75 2815.07 1029510.98 00:10:05.977 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:05.977 Verification LBA range: start 0x2000 length 0x2000 00:10:05.977 Nvme3n1 : 6.09 139.55 8.72 0.00 0.00 746170.23 826.65 1868371.78 00:10:05.977 =================================================================================================================== 00:10:05.977 Total : 1594.27 99.64 0.00 0.00 993469.57 
826.65 2303054.20 00:10:06.913 00:10:06.913 real 0m8.800s 00:10:06.913 user 0m16.402s 00:10:06.913 sys 0m0.229s 00:10:06.913 21:09:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.913 21:09:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:06.913 ************************************ 00:10:06.913 END TEST bdev_verify_big_io 00:10:06.913 ************************************ 00:10:06.913 21:09:18 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:06.913 21:09:18 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:06.913 21:09:18 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:06.913 21:09:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.913 21:09:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:06.913 ************************************ 00:10:06.913 START TEST bdev_write_zeroes 00:10:06.913 ************************************ 00:10:06.913 21:09:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:06.913 [2024-07-14 21:09:18.457476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:06.913 [2024-07-14 21:09:18.457650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68332 ] 00:10:07.172 [2024-07-14 21:09:18.617478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.431 [2024-07-14 21:09:18.768703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.999 Running I/O for 1 seconds... 
00:10:08.933 00:10:08.933 Latency(us) 00:10:08.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.933 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:08.933 Nvme0n1p1 : 1.02 6905.23 26.97 0.00 0.00 18455.18 14000.87 28001.75 00:10:08.933 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:08.933 Nvme0n1p2 : 1.02 6894.01 26.93 0.00 0.00 18449.52 14358.34 27763.43 00:10:08.933 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:08.933 Nvme1n1 : 1.02 6883.62 26.89 0.00 0.00 18390.25 14954.12 23592.96 00:10:08.933 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:08.933 Nvme2n1 : 1.03 6922.41 27.04 0.00 0.00 18304.97 11021.96 22758.87 00:10:08.933 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:08.933 Nvme2n2 : 1.03 6911.96 27.00 0.00 0.00 18277.28 11319.85 22163.08 00:10:08.933 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:08.933 Nvme2n3 : 1.03 6901.66 26.96 0.00 0.00 18249.69 11021.96 21448.15 00:10:08.933 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:08.933 Nvme3n1 : 1.03 6891.51 26.92 0.00 0.00 18228.75 10009.13 21448.15 00:10:08.933 =================================================================================================================== 00:10:08.933 Total : 48310.39 188.71 0.00 0.00 18336.15 10009.13 28001.75 00:10:10.321 00:10:10.321 real 0m3.236s 00:10:10.321 user 0m2.906s 00:10:10.321 sys 0m0.208s 00:10:10.321 21:09:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.321 ************************************ 00:10:10.321 END TEST bdev_write_zeroes 00:10:10.321 ************************************ 00:10:10.321 21:09:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:10.321 21:09:21 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:10.321 21:09:21 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:10.321 21:09:21 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:10.321 21:09:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.321 21:09:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:10.321 ************************************ 00:10:10.321 START TEST bdev_json_nonenclosed 00:10:10.321 ************************************ 00:10:10.321 21:09:21 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:10.321 [2024-07-14 21:09:21.777822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:10.321 [2024-07-14 21:09:21.778020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68385 ] 00:10:10.595 [2024-07-14 21:09:21.952911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.853 [2024-07-14 21:09:22.174211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.853 [2024-07-14 21:09:22.174344] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:10.853 [2024-07-14 21:09:22.174368] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:10.853 [2024-07-14 21:09:22.174385] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:11.111 00:10:11.111 real 0m0.851s 00:10:11.111 user 0m0.614s 00:10:11.111 sys 0m0.130s 00:10:11.111 21:09:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:10:11.111 21:09:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.111 ************************************ 00:10:11.111 END TEST bdev_json_nonenclosed 00:10:11.111 ************************************ 00:10:11.111 21:09:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:11.111 21:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:10:11.111 21:09:22 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:10:11.111 21:09:22 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:11.111 21:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:11.111 21:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.111 21:09:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.111 ************************************ 00:10:11.111 START TEST bdev_json_nonarray 00:10:11.111 ************************************ 00:10:11.111 21:09:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:11.111 [2024-07-14 21:09:22.656087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:11.112 [2024-07-14 21:09:22.656250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68416 ] 00:10:11.369 [2024-07-14 21:09:22.806517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.628 [2024-07-14 21:09:22.962051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.628 [2024-07-14 21:09:22.962186] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:11.628 [2024-07-14 21:09:22.962210] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:11.628 [2024-07-14 21:09:22.962225] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:11.887 00:10:11.887 real 0m0.743s 00:10:11.887 user 0m0.525s 00:10:11.887 sys 0m0.113s 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:11.887 ************************************ 00:10:11.887 END TEST bdev_json_nonarray 00:10:11.887 ************************************ 00:10:11.887 21:09:23 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:10:11.887 21:09:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:10:11.887 21:09:23 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:10:11.887 21:09:23 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:10:11.887 21:09:23 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:11.887 21:09:23 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:11.887 21:09:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.887 21:09:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.887 ************************************ 00:10:11.887 START TEST bdev_gpt_uuid 00:10:11.887 ************************************ 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68442 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 68442 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 68442 ']' 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.887 21:09:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:12.145 [2024-07-14 21:09:23.501558] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:12.145 [2024-07-14 21:09:23.501753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68442 ] 00:10:12.145 [2024-07-14 21:09:23.672168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.403 [2024-07-14 21:09:23.830374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.970 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.970 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:10:12.970 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:12.970 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.970 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:13.229 Some configs were skipped because the RPC state that can call them passed over. 00:10:13.229 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.229 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:10:13.229 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.229 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:10:13.488 { 00:10:13.488 "name": "Nvme0n1p1", 00:10:13.488 "aliases": [ 00:10:13.488 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:13.488 ], 00:10:13.488 "product_name": "GPT Disk", 00:10:13.488 "block_size": 4096, 00:10:13.488 "num_blocks": 774144, 00:10:13.488 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:13.488 "md_size": 64, 00:10:13.488 "md_interleave": false, 00:10:13.488 "dif_type": 0, 00:10:13.488 "assigned_rate_limits": { 00:10:13.488 "rw_ios_per_sec": 0, 00:10:13.488 "rw_mbytes_per_sec": 0, 00:10:13.488 "r_mbytes_per_sec": 0, 00:10:13.488 "w_mbytes_per_sec": 0 00:10:13.488 }, 00:10:13.488 "claimed": false, 00:10:13.488 "zoned": false, 00:10:13.488 "supported_io_types": { 00:10:13.488 "read": true, 00:10:13.488 "write": true, 00:10:13.488 "unmap": true, 00:10:13.488 "flush": true, 00:10:13.488 "reset": true, 00:10:13.488 "nvme_admin": false, 00:10:13.488 "nvme_io": false, 00:10:13.488 "nvme_io_md": false, 00:10:13.488 "write_zeroes": true, 00:10:13.488 "zcopy": false, 00:10:13.488 "get_zone_info": false, 00:10:13.488 "zone_management": false, 00:10:13.488 "zone_append": false, 00:10:13.488 "compare": true, 00:10:13.488 "compare_and_write": false, 00:10:13.488 "abort": true, 00:10:13.488 "seek_hole": false, 00:10:13.488 "seek_data": false, 00:10:13.488 "copy": 
true, 00:10:13.488 "nvme_iov_md": false 00:10:13.488 }, 00:10:13.488 "driver_specific": { 00:10:13.488 "gpt": { 00:10:13.488 "base_bdev": "Nvme0n1", 00:10:13.488 "offset_blocks": 256, 00:10:13.488 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:13.488 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:13.488 "partition_name": "SPDK_TEST_first" 00:10:13.488 } 00:10:13.488 } 00:10:13.488 } 00:10:13.488 ]' 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:10:13.488 { 00:10:13.488 "name": "Nvme0n1p2", 00:10:13.488 "aliases": [ 00:10:13.488 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:13.488 ], 00:10:13.488 "product_name": "GPT Disk", 00:10:13.488 "block_size": 4096, 00:10:13.488 "num_blocks": 774143, 00:10:13.488 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:13.488 "md_size": 64, 00:10:13.488 "md_interleave": false, 00:10:13.488 "dif_type": 0, 00:10:13.488 "assigned_rate_limits": { 00:10:13.488 "rw_ios_per_sec": 0, 00:10:13.488 "rw_mbytes_per_sec": 0, 00:10:13.488 "r_mbytes_per_sec": 0, 00:10:13.488 "w_mbytes_per_sec": 0 00:10:13.488 }, 00:10:13.488 "claimed": false, 00:10:13.488 "zoned": false, 00:10:13.488 "supported_io_types": { 00:10:13.488 "read": true, 00:10:13.488 "write": true, 00:10:13.488 "unmap": true, 00:10:13.488 "flush": true, 00:10:13.488 "reset": true, 00:10:13.488 "nvme_admin": false, 00:10:13.488 "nvme_io": false, 00:10:13.488 "nvme_io_md": false, 00:10:13.488 "write_zeroes": true, 00:10:13.488 "zcopy": false, 00:10:13.488 "get_zone_info": false, 00:10:13.488 "zone_management": false, 00:10:13.488 "zone_append": false, 00:10:13.488 "compare": true, 00:10:13.488 "compare_and_write": false, 00:10:13.488 "abort": true, 00:10:13.488 "seek_hole": false, 00:10:13.488 "seek_data": false, 00:10:13.488 "copy": true, 00:10:13.488 "nvme_iov_md": false 00:10:13.488 }, 00:10:13.488 "driver_specific": { 00:10:13.488 "gpt": { 00:10:13.488 "base_bdev": "Nvme0n1", 00:10:13.488 "offset_blocks": 774400, 00:10:13.488 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:13.488 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:13.488 "partition_name": "SPDK_TEST_second" 00:10:13.488 } 00:10:13.488 
} 00:10:13.488 } 00:10:13.488 ]' 00:10:13.488 21:09:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 68442 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 68442 ']' 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 68442 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68442 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:13.747 killing process with pid 68442 00:10:13.747 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68442' 00:10:13.748 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 68442 00:10:13.748 21:09:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 68442 00:10:15.652 00:10:15.652 real 0m3.521s 00:10:15.652 user 0m3.812s 00:10:15.652 sys 0m0.437s 00:10:15.652 21:09:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.653 21:09:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:15.653 ************************************ 00:10:15.653 END TEST bdev_gpt_uuid 00:10:15.653 ************************************ 00:10:15.653 21:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:15.653 21:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
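The bdev_gpt_uuid assertions above reduce to: fetch the GPT bdev as JSON by its partition GUID, then require that both the bdev alias and driver_specific.gpt.unique_partition_guid equal that GUID. A condensed sketch using the same rpc.py and jq calls as the trace; the default /var/tmp/spdk.sock rpc socket is assumed, and the GUID shown is the SPDK_TEST_first value from the dump above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    guid=6f89f330-603b-4116-ac73-2ca8eae53030       # SPDK_TEST_first partition GUID from the trace
    bdev_json=$("$rpc" bdev_get_bdevs -b "$guid")   # default rpc socket assumed
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$guid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$guid" ]]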
00:10:15.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:16.170 Waiting for block devices as requested 00:10:16.170 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.170 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.170 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.430 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.704 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:21.704 21:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:10:21.704 21:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:10:21.704 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:21.704 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:21.704 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:21.704 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:10:21.704 21:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:21.704 00:10:21.704 real 1m2.170s 00:10:21.704 user 1m19.286s 00:10:21.704 sys 0m9.324s 00:10:21.704 21:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.704 21:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:21.704 ************************************ 00:10:21.704 END TEST blockdev_nvme_gpt 00:10:21.704 ************************************ 00:10:21.704 21:09:33 -- common/autotest_common.sh@1142 -- # return 0 00:10:21.704 21:09:33 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:21.704 21:09:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:21.704 21:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.704 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:10:21.704 ************************************ 00:10:21.704 START TEST nvme 00:10:21.704 ************************************ 00:10:21.704 21:09:33 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:21.962 * Looking for test storage... 
00:10:21.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:21.962 21:09:33 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:22.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.096 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:23.096 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:23.096 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:23.096 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:23.096 21:09:34 nvme -- nvme/nvme.sh@79 -- # uname 00:10:23.096 21:09:34 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:23.096 21:09:34 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:23.096 21:09:34 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1069 -- # stubpid=69079 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:10:23.096 Waiting for stub to ready for secondary processes... 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69079 ]] 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:23.096 21:09:34 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:10:23.096 [2024-07-14 21:09:34.568134] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:23.096 [2024-07-14 21:09:34.568323] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:24.032 [2024-07-14 21:09:35.362219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.032 [2024-07-14 21:09:35.516860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.032 [2024-07-14 21:09:35.516909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.032 [2024-07-14 21:09:35.516915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.032 21:09:35 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:24.032 21:09:35 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69079 ]] 00:10:24.032 21:09:35 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:10:24.032 [2024-07-14 21:09:35.535494] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:24.032 [2024-07-14 21:09:35.535554] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:24.032 [2024-07-14 21:09:35.547822] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:24.032 [2024-07-14 21:09:35.547944] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:24.032 [2024-07-14 21:09:35.550188] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:24.032 [2024-07-14 21:09:35.550381] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:24.032 [2024-07-14 21:09:35.550461] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:24.032 [2024-07-14 21:09:35.552623] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:24.032 [2024-07-14 21:09:35.552812] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:24.032 [2024-07-14 21:09:35.552878] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:24.032 [2024-07-14 21:09:35.555039] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:24.032 [2024-07-14 21:09:35.555219] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:24.032 [2024-07-14 21:09:35.555292] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:24.032 [2024-07-14 21:09:35.555345] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:24.032 [2024-07-14 21:09:35.555562] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:25.409 done. 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@1076 -- # echo done. 
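The alternating '[' -e /var/run/spdk_stub0 ']', [[ -e /proc/69079 ]] and sleep 1s steps above are the gate that holds the nvme tests until the stub, the primary DPDK process, is ready: spin until the stub drops its marker file, and bail out if the stub pid vanishes first. A compact sketch of that wait, reconstructed from the traced autotest_common.sh steps; the early-exit handling here is illustrative rather than verbatim:

    stubpid=69079   # pid seen in this run's trace; normally captured from $! after launching the stub
    while [ ! -e /var/run/spdk_stub0 ]; do
        # Give up if the stub process died before creating its marker file.
        [ -e "/proc/$stubpid" ] || { echo 'stub exited early' >&2; exit 1; }
        sleep 1s
    done
    echo done.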
00:10:25.409 21:09:36 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.409 ************************************ 00:10:25.409 START TEST nvme_reset 00:10:25.409 ************************************ 00:10:25.409 21:09:36 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:25.409 Initializing NVMe Controllers 00:10:25.409 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:25.409 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:25.409 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:25.409 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:25.409 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:25.409 00:10:25.409 real 0m0.292s 00:10:25.409 user 0m0.107s 00:10:25.409 sys 0m0.141s 00:10:25.409 21:09:36 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.409 21:09:36 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:25.409 ************************************ 00:10:25.409 END TEST nvme_reset 00:10:25.409 ************************************ 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:25.409 21:09:36 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.409 21:09:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.409 ************************************ 00:10:25.409 START TEST nvme_identify 00:10:25.409 ************************************ 00:10:25.409 21:09:36 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:10:25.409 21:09:36 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:25.409 21:09:36 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:25.409 21:09:36 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:25.409 21:09:36 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:25.409 21:09:36 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:25.409 21:09:36 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:10:25.409 21:09:36 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:25.409 21:09:36 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:25.409 21:09:36 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:25.409 21:09:36 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:25.409 21:09:36 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:25.409 21:09:36 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:25.670 [2024-07-14 21:09:37.194019] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69112 terminated unexpected 00:10:25.670 ===================================================== 00:10:25.670 NVMe 
Controller at 0000:00:10.0 [1b36:0010] 00:10:25.670 ===================================================== 00:10:25.670 Controller Capabilities/Features 00:10:25.670 ================================ 00:10:25.670 Vendor ID: 1b36 00:10:25.670 Subsystem Vendor ID: 1af4 00:10:25.670 Serial Number: 12340 00:10:25.670 Model Number: QEMU NVMe Ctrl 00:10:25.670 Firmware Version: 8.0.0 00:10:25.670 Recommended Arb Burst: 6 00:10:25.670 IEEE OUI Identifier: 00 54 52 00:10:25.670 Multi-path I/O 00:10:25.670 May have multiple subsystem ports: No 00:10:25.670 May have multiple controllers: No 00:10:25.670 Associated with SR-IOV VF: No 00:10:25.670 Max Data Transfer Size: 524288 00:10:25.670 Max Number of Namespaces: 256 00:10:25.670 Max Number of I/O Queues: 64 00:10:25.670 NVMe Specification Version (VS): 1.4 00:10:25.670 NVMe Specification Version (Identify): 1.4 00:10:25.670 Maximum Queue Entries: 2048 00:10:25.670 Contiguous Queues Required: Yes 00:10:25.670 Arbitration Mechanisms Supported 00:10:25.670 Weighted Round Robin: Not Supported 00:10:25.670 Vendor Specific: Not Supported 00:10:25.670 Reset Timeout: 7500 ms 00:10:25.670 Doorbell Stride: 4 bytes 00:10:25.670 NVM Subsystem Reset: Not Supported 00:10:25.670 Command Sets Supported 00:10:25.670 NVM Command Set: Supported 00:10:25.670 Boot Partition: Not Supported 00:10:25.670 Memory Page Size Minimum: 4096 bytes 00:10:25.670 Memory Page Size Maximum: 65536 bytes 00:10:25.670 Persistent Memory Region: Not Supported 00:10:25.670 Optional Asynchronous Events Supported 00:10:25.670 Namespace Attribute Notices: Supported 00:10:25.670 Firmware Activation Notices: Not Supported 00:10:25.670 ANA Change Notices: Not Supported 00:10:25.670 PLE Aggregate Log Change Notices: Not Supported 00:10:25.670 LBA Status Info Alert Notices: Not Supported 00:10:25.670 EGE Aggregate Log Change Notices: Not Supported 00:10:25.670 Normal NVM Subsystem Shutdown event: Not Supported 00:10:25.670 Zone Descriptor Change Notices: Not Supported 00:10:25.670 Discovery Log Change Notices: Not Supported 00:10:25.670 Controller Attributes 00:10:25.670 128-bit Host Identifier: Not Supported 00:10:25.670 Non-Operational Permissive Mode: Not Supported 00:10:25.670 NVM Sets: Not Supported 00:10:25.670 Read Recovery Levels: Not Supported 00:10:25.670 Endurance Groups: Not Supported 00:10:25.670 Predictable Latency Mode: Not Supported 00:10:25.670 Traffic Based Keep ALive: Not Supported 00:10:25.670 Namespace Granularity: Not Supported 00:10:25.670 SQ Associations: Not Supported 00:10:25.670 UUID List: Not Supported 00:10:25.670 Multi-Domain Subsystem: Not Supported 00:10:25.670 Fixed Capacity Management: Not Supported 00:10:25.670 Variable Capacity Management: Not Supported 00:10:25.670 Delete Endurance Group: Not Supported 00:10:25.670 Delete NVM Set: Not Supported 00:10:25.670 Extended LBA Formats Supported: Supported 00:10:25.670 Flexible Data Placement Supported: Not Supported 00:10:25.670 00:10:25.670 Controller Memory Buffer Support 00:10:25.670 ================================ 00:10:25.670 Supported: No 00:10:25.670 00:10:25.670 Persistent Memory Region Support 00:10:25.670 ================================ 00:10:25.671 Supported: No 00:10:25.671 00:10:25.671 Admin Command Set Attributes 00:10:25.671 ============================ 00:10:25.671 Security Send/Receive: Not Supported 00:10:25.671 Format NVM: Supported 00:10:25.671 Firmware Activate/Download: Not Supported 00:10:25.671 Namespace Management: Supported 00:10:25.671 Device Self-Test: Not Supported 00:10:25.671 
Directives: Supported 00:10:25.671 NVMe-MI: Not Supported 00:10:25.671 Virtualization Management: Not Supported 00:10:25.671 Doorbell Buffer Config: Supported 00:10:25.671 Get LBA Status Capability: Not Supported 00:10:25.671 Command & Feature Lockdown Capability: Not Supported 00:10:25.671 Abort Command Limit: 4 00:10:25.671 Async Event Request Limit: 4 00:10:25.671 Number of Firmware Slots: N/A 00:10:25.671 Firmware Slot 1 Read-Only: N/A 00:10:25.671 Firmware Activation Without Reset: N/A 00:10:25.671 Multiple Update Detection Support: N/A 00:10:25.671 Firmware Update Granularity: No Information Provided 00:10:25.671 Per-Namespace SMART Log: Yes 00:10:25.671 Asymmetric Namespace Access Log Page: Not Supported 00:10:25.671 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:25.671 Command Effects Log Page: Supported 00:10:25.671 Get Log Page Extended Data: Supported 00:10:25.671 Telemetry Log Pages: Not Supported 00:10:25.671 Persistent Event Log Pages: Not Supported 00:10:25.671 Supported Log Pages Log Page: May Support 00:10:25.671 Commands Supported & Effects Log Page: Not Supported 00:10:25.671 Feature Identifiers & Effects Log Page:May Support 00:10:25.671 NVMe-MI Commands & Effects Log Page: May Support 00:10:25.671 Data Area 4 for Telemetry Log: Not Supported 00:10:25.671 Error Log Page Entries Supported: 1 00:10:25.671 Keep Alive: Not Supported 00:10:25.671 00:10:25.671 NVM Command Set Attributes 00:10:25.671 ========================== 00:10:25.671 Submission Queue Entry Size 00:10:25.671 Max: 64 00:10:25.671 Min: 64 00:10:25.671 Completion Queue Entry Size 00:10:25.671 Max: 16 00:10:25.671 Min: 16 00:10:25.671 Number of Namespaces: 256 00:10:25.671 Compare Command: Supported 00:10:25.671 Write Uncorrectable Command: Not Supported 00:10:25.671 Dataset Management Command: Supported 00:10:25.671 Write Zeroes Command: Supported 00:10:25.671 Set Features Save Field: Supported 00:10:25.671 Reservations: Not Supported 00:10:25.671 Timestamp: Supported 00:10:25.671 Copy: Supported 00:10:25.671 Volatile Write Cache: Present 00:10:25.671 Atomic Write Unit (Normal): 1 00:10:25.671 Atomic Write Unit (PFail): 1 00:10:25.671 Atomic Compare & Write Unit: 1 00:10:25.671 Fused Compare & Write: Not Supported 00:10:25.671 Scatter-Gather List 00:10:25.671 SGL Command Set: Supported 00:10:25.671 SGL Keyed: Not Supported 00:10:25.671 SGL Bit Bucket Descriptor: Not Supported 00:10:25.671 SGL Metadata Pointer: Not Supported 00:10:25.671 Oversized SGL: Not Supported 00:10:25.671 SGL Metadata Address: Not Supported 00:10:25.671 SGL Offset: Not Supported 00:10:25.671 Transport SGL Data Block: Not Supported 00:10:25.671 Replay Protected Memory Block: Not Supported 00:10:25.671 00:10:25.671 Firmware Slot Information 00:10:25.671 ========================= 00:10:25.671 Active slot: 1 00:10:25.671 Slot 1 Firmware Revision: 1.0 00:10:25.671 00:10:25.671 00:10:25.671 Commands Supported and Effects 00:10:25.671 ============================== 00:10:25.671 Admin Commands 00:10:25.671 -------------- 00:10:25.671 Delete I/O Submission Queue (00h): Supported 00:10:25.671 Create I/O Submission Queue (01h): Supported 00:10:25.671 Get Log Page (02h): Supported 00:10:25.671 Delete I/O Completion Queue (04h): Supported 00:10:25.671 Create I/O Completion Queue (05h): Supported 00:10:25.671 Identify (06h): Supported 00:10:25.671 Abort (08h): Supported 00:10:25.671 Set Features (09h): Supported 00:10:25.671 Get Features (0Ah): Supported 00:10:25.671 Asynchronous Event Request (0Ch): Supported 00:10:25.671 Namespace Attachment 
(15h): Supported NS-Inventory-Change 00:10:25.671 Directive Send (19h): Supported 00:10:25.671 Directive Receive (1Ah): Supported 00:10:25.671 Virtualization Management (1Ch): Supported 00:10:25.671 Doorbell Buffer Config (7Ch): Supported 00:10:25.671 Format NVM (80h): Supported LBA-Change 00:10:25.671 I/O Commands 00:10:25.671 ------------ 00:10:25.671 Flush (00h): Supported LBA-Change 00:10:25.671 Write (01h): Supported LBA-Change 00:10:25.671 Read (02h): Supported 00:10:25.671 Compare (05h): Supported 00:10:25.671 Write Zeroes (08h): Supported LBA-Change 00:10:25.671 Dataset Management (09h): Supported LBA-Change 00:10:25.671 Unknown (0Ch): Supported 00:10:25.671 Unknown (12h): Supported 00:10:25.671 Copy (19h): Supported LBA-Change 00:10:25.671 Unknown (1Dh): Supported LBA-Change 00:10:25.671 00:10:25.671 Error Log 00:10:25.671 ========= 00:10:25.671 00:10:25.671 Arbitration 00:10:25.671 =========== 00:10:25.671 Arbitration Burst: no limit 00:10:25.671 00:10:25.671 Power Management 00:10:25.671 ================ 00:10:25.671 Number of Power States: 1 00:10:25.671 Current Power State: Power State #0 00:10:25.671 Power State #0: 00:10:25.671 Max Power: 25.00 W 00:10:25.671 Non-Operational State: Operational 00:10:25.671 Entry Latency: 16 microseconds 00:10:25.671 Exit Latency: 4 microseconds 00:10:25.671 Relative Read Throughput: 0 00:10:25.671 Relative Read Latency: 0 00:10:25.671 Relative Write Throughput: 0 00:10:25.671 Relative Write Latency: 0 00:10:25.671 [2024-07-14 21:09:37.195621] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69112 terminated unexpected 00:10:25.671 Idle Power: Not Reported 00:10:25.671 Active Power: Not Reported 00:10:25.671 Non-Operational Permissive Mode: Not Supported 00:10:25.671 00:10:25.671 Health Information 00:10:25.671 ================== 00:10:25.671 Critical Warnings: 00:10:25.671 Available Spare Space: OK 00:10:25.671 Temperature: OK 00:10:25.671 Device Reliability: OK 00:10:25.671 Read Only: No 00:10:25.671 Volatile Memory Backup: OK 00:10:25.671 Current Temperature: 323 Kelvin (50 Celsius) 00:10:25.671 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:25.671 Available Spare: 0% 00:10:25.671 Available Spare Threshold: 0% 00:10:25.671 Life Percentage Used: 0% 00:10:25.671 Data Units Read: 1035 00:10:25.671 Data Units Written: 862 00:10:25.671 Host Read Commands: 49347 00:10:25.671 Host Write Commands: 47787 00:10:25.671 Controller Busy Time: 0 minutes 00:10:25.671 Power Cycles: 0 00:10:25.671 Power On Hours: 0 hours 00:10:25.671 Unsafe Shutdowns: 0 00:10:25.671 Unrecoverable Media Errors: 0 00:10:25.671 Lifetime Error Log Entries: 0 00:10:25.671 Warning Temperature Time: 0 minutes 00:10:25.671 Critical Temperature Time: 0 minutes 00:10:25.671 00:10:25.671 Number of Queues 00:10:25.671 ================ 00:10:25.671 Number of I/O Submission Queues: 64 00:10:25.671 Number of I/O Completion Queues: 64 00:10:25.671 00:10:25.671 ZNS Specific Controller Data 00:10:25.671 ============================ 00:10:25.671 Zone Append Size Limit: 0 00:10:25.671 00:10:25.671 00:10:25.671 Active Namespaces 00:10:25.671 ================= 00:10:25.671 Namespace ID:1 00:10:25.671 Error Recovery Timeout: Unlimited 00:10:25.671 Command Set Identifier: NVM (00h) 00:10:25.671 Deallocate: Supported 00:10:25.671 Deallocated/Unwritten Error: Supported 00:10:25.671 Deallocated Read Value: All 0x00 00:10:25.671 Deallocate in Write Zeroes: Not Supported 00:10:25.671 Deallocated Guard Field: 0xFFFF 00:10:25.671 Flush: Supported 00:10:25.671
Reservation: Not Supported 00:10:25.671 Metadata Transferred as: Separate Metadata Buffer 00:10:25.671 Namespace Sharing Capabilities: Private 00:10:25.671 Size (in LBAs): 1548666 (5GiB) 00:10:25.671 Capacity (in LBAs): 1548666 (5GiB) 00:10:25.671 Utilization (in LBAs): 1548666 (5GiB) 00:10:25.671 Thin Provisioning: Not Supported 00:10:25.671 Per-NS Atomic Units: No 00:10:25.671 Maximum Single Source Range Length: 128 00:10:25.671 Maximum Copy Length: 128 00:10:25.671 Maximum Source Range Count: 128 00:10:25.671 NGUID/EUI64 Never Reused: No 00:10:25.671 Namespace Write Protected: No 00:10:25.671 Number of LBA Formats: 8 00:10:25.671 Current LBA Format: LBA Format #07 00:10:25.671 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:25.671 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:25.671 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:25.671 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:25.671 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:25.671 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:25.671 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:25.671 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:25.671 00:10:25.671 NVM Specific Namespace Data 00:10:25.671 =========================== 00:10:25.671 Logical Block Storage Tag Mask: 0 00:10:25.671 Protection Information Capabilities: 00:10:25.671 16b Guard Protection Information Storage Tag Support: No 00:10:25.671 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:25.671 Storage Tag Check Read Support: No 00:10:25.671 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.671 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.672 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.672 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.672 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.672 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.672 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.672 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.672 ===================================================== 00:10:25.672 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:25.672 ===================================================== 00:10:25.672 Controller Capabilities/Features 00:10:25.672 ================================ 00:10:25.672 Vendor ID: 1b36 00:10:25.672 Subsystem Vendor ID: 1af4 00:10:25.672 Serial Number: 12341 00:10:25.672 Model Number: QEMU NVMe Ctrl 00:10:25.672 Firmware Version: 8.0.0 00:10:25.672 Recommended Arb Burst: 6 00:10:25.672 IEEE OUI Identifier: 00 54 52 00:10:25.672 Multi-path I/O 00:10:25.672 May have multiple subsystem ports: No 00:10:25.672 May have multiple controllers: No 00:10:25.672 Associated with SR-IOV VF: No 00:10:25.672 Max Data Transfer Size: 524288 00:10:25.672 Max Number of Namespaces: 256 00:10:25.672 Max Number of I/O Queues: 64 00:10:25.672 NVMe Specification Version (VS): 1.4 00:10:25.672 NVMe Specification Version (Identify): 1.4 00:10:25.672 Maximum Queue Entries: 2048 00:10:25.672 Contiguous Queues Required: Yes 00:10:25.672 Arbitration Mechanisms Supported 00:10:25.672 Weighted Round Robin: Not Supported 00:10:25.672 Vendor 
Specific: Not Supported 00:10:25.672 Reset Timeout: 7500 ms 00:10:25.672 Doorbell Stride: 4 bytes 00:10:25.672 NVM Subsystem Reset: Not Supported 00:10:25.672 Command Sets Supported 00:10:25.672 NVM Command Set: Supported 00:10:25.672 Boot Partition: Not Supported 00:10:25.672 Memory Page Size Minimum: 4096 bytes 00:10:25.672 Memory Page Size Maximum: 65536 bytes 00:10:25.672 Persistent Memory Region: Not Supported 00:10:25.672 Optional Asynchronous Events Supported 00:10:25.672 Namespace Attribute Notices: Supported 00:10:25.672 Firmware Activation Notices: Not Supported 00:10:25.672 ANA Change Notices: Not Supported 00:10:25.672 PLE Aggregate Log Change Notices: Not Supported 00:10:25.672 LBA Status Info Alert Notices: Not Supported 00:10:25.672 EGE Aggregate Log Change Notices: Not Supported 00:10:25.672 Normal NVM Subsystem Shutdown event: Not Supported 00:10:25.672 Zone Descriptor Change Notices: Not Supported 00:10:25.672 Discovery Log Change Notices: Not Supported 00:10:25.672 Controller Attributes 00:10:25.672 128-bit Host Identifier: Not Supported 00:10:25.672 Non-Operational Permissive Mode: Not Supported 00:10:25.672 NVM Sets: Not Supported 00:10:25.672 Read Recovery Levels: Not Supported 00:10:25.672 Endurance Groups: Not Supported 00:10:25.672 Predictable Latency Mode: Not Supported 00:10:25.672 Traffic Based Keep ALive: Not Supported 00:10:25.672 Namespace Granularity: Not Supported 00:10:25.672 SQ Associations: Not Supported 00:10:25.672 UUID List: Not Supported 00:10:25.672 Multi-Domain Subsystem: Not Supported 00:10:25.672 Fixed Capacity Management: Not Supported 00:10:25.672 Variable Capacity Management: Not Supported 00:10:25.672 Delete Endurance Group: Not Supported 00:10:25.672 Delete NVM Set: Not Supported 00:10:25.672 Extended LBA Formats Supported: Supported 00:10:25.672 Flexible Data Placement Supported: Not Supported 00:10:25.672 00:10:25.672 Controller Memory Buffer Support 00:10:25.672 ================================ 00:10:25.672 Supported: No 00:10:25.672 00:10:25.672 Persistent Memory Region Support 00:10:25.672 ================================ 00:10:25.672 Supported: No 00:10:25.672 00:10:25.672 Admin Command Set Attributes 00:10:25.672 ============================ 00:10:25.672 Security Send/Receive: Not Supported 00:10:25.672 Format NVM: Supported 00:10:25.672 Firmware Activate/Download: Not Supported 00:10:25.672 Namespace Management: Supported 00:10:25.672 Device Self-Test: Not Supported 00:10:25.672 Directives: Supported 00:10:25.672 NVMe-MI: Not Supported 00:10:25.672 Virtualization Management: Not Supported 00:10:25.672 Doorbell Buffer Config: Supported 00:10:25.672 Get LBA Status Capability: Not Supported 00:10:25.672 Command & Feature Lockdown Capability: Not Supported 00:10:25.672 Abort Command Limit: 4 00:10:25.672 Async Event Request Limit: 4 00:10:25.672 Number of Firmware Slots: N/A 00:10:25.672 Firmware Slot 1 Read-Only: N/A 00:10:25.672 Firmware Activation Without Reset: N/A 00:10:25.672 Multiple Update Detection Support: N/A 00:10:25.672 Firmware Update Granularity: No Information Provided 00:10:25.672 Per-Namespace SMART Log: Yes 00:10:25.672 Asymmetric Namespace Access Log Page: Not Supported 00:10:25.672 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:25.672 Command Effects Log Page: Supported 00:10:25.672 Get Log Page Extended Data: Supported 00:10:25.672 Telemetry Log Pages: Not Supported 00:10:25.672 Persistent Event Log Pages: Not Supported 00:10:25.672 Supported Log Pages Log Page: May Support 00:10:25.672 Commands Supported & Effects 
Log Page: Not Supported 00:10:25.672 Feature Identifiers & Effects Log Page:May Support 00:10:25.672 NVMe-MI Commands & Effects Log Page: May Support 00:10:25.672 Data Area 4 for Telemetry Log: Not Supported 00:10:25.672 Error Log Page Entries Supported: 1 00:10:25.672 Keep Alive: Not Supported 00:10:25.672 00:10:25.672 NVM Command Set Attributes 00:10:25.672 ========================== 00:10:25.672 Submission Queue Entry Size 00:10:25.672 Max: 64 00:10:25.672 Min: 64 00:10:25.672 Completion Queue Entry Size 00:10:25.672 Max: 16 00:10:25.672 Min: 16 00:10:25.672 Number of Namespaces: 256 00:10:25.672 Compare Command: Supported 00:10:25.672 Write Uncorrectable Command: Not Supported 00:10:25.672 Dataset Management Command: Supported 00:10:25.672 Write Zeroes Command: Supported 00:10:25.672 Set Features Save Field: Supported 00:10:25.672 Reservations: Not Supported 00:10:25.672 Timestamp: Supported 00:10:25.672 Copy: Supported 00:10:25.672 Volatile Write Cache: Present 00:10:25.672 Atomic Write Unit (Normal): 1 00:10:25.672 Atomic Write Unit (PFail): 1 00:10:25.672 Atomic Compare & Write Unit: 1 00:10:25.672 Fused Compare & Write: Not Supported 00:10:25.672 Scatter-Gather List 00:10:25.672 SGL Command Set: Supported 00:10:25.672 SGL Keyed: Not Supported 00:10:25.672 SGL Bit Bucket Descriptor: Not Supported 00:10:25.672 SGL Metadata Pointer: Not Supported 00:10:25.672 Oversized SGL: Not Supported 00:10:25.672 SGL Metadata Address: Not Supported 00:10:25.672 SGL Offset: Not Supported 00:10:25.672 Transport SGL Data Block: Not Supported 00:10:25.672 Replay Protected Memory Block: Not Supported 00:10:25.672 00:10:25.672 Firmware Slot Information 00:10:25.672 ========================= 00:10:25.672 Active slot: 1 00:10:25.672 Slot 1 Firmware Revision: 1.0 00:10:25.672 00:10:25.672 00:10:25.672 Commands Supported and Effects 00:10:25.672 ============================== 00:10:25.672 Admin Commands 00:10:25.672 -------------- 00:10:25.672 Delete I/O Submission Queue (00h): Supported 00:10:25.672 Create I/O Submission Queue (01h): Supported 00:10:25.672 Get Log Page (02h): Supported 00:10:25.672 Delete I/O Completion Queue (04h): Supported 00:10:25.672 Create I/O Completion Queue (05h): Supported 00:10:25.672 Identify (06h): Supported 00:10:25.672 Abort (08h): Supported 00:10:25.672 Set Features (09h): Supported 00:10:25.672 Get Features (0Ah): Supported 00:10:25.672 Asynchronous Event Request (0Ch): Supported 00:10:25.672 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:25.672 Directive Send (19h): Supported 00:10:25.672 Directive Receive (1Ah): Supported 00:10:25.672 Virtualization Management (1Ch): Supported 00:10:25.672 Doorbell Buffer Config (7Ch): Supported 00:10:25.672 Format NVM (80h): Supported LBA-Change 00:10:25.672 I/O Commands 00:10:25.672 ------------ 00:10:25.672 Flush (00h): Supported LBA-Change 00:10:25.672 Write (01h): Supported LBA-Change 00:10:25.672 Read (02h): Supported 00:10:25.672 Compare (05h): Supported 00:10:25.672 Write Zeroes (08h): Supported LBA-Change 00:10:25.672 Dataset Management (09h): Supported LBA-Change 00:10:25.672 Unknown (0Ch): Supported 00:10:25.672 Unknown (12h): Supported 00:10:25.672 Copy (19h): Supported LBA-Change 00:10:25.672 Unknown (1Dh): Supported LBA-Change 00:10:25.672 00:10:25.672 Error Log 00:10:25.672 ========= 00:10:25.672 00:10:25.672 Arbitration 00:10:25.672 =========== 00:10:25.672 Arbitration Burst: no limit 00:10:25.672 00:10:25.672 Power Management 00:10:25.672 ================ 00:10:25.672 Number of Power States: 1 
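(A unit note for the namespace sections in these dumps: Size, Capacity and Utilization are LBA counts, and since the current LBA format reports a 4096-byte data size, the parenthetical GiB figures are plain integer division, e.g. 1310720 * 4096 / 2^30 = 5. A one-line bash check, with lbas_to_gib as a hypothetical helper name:)

    lbas_to_gib() { echo $(( $1 * 4096 / 1024**3 )); }  # assumes the 4096-byte LBA format shown as current
    lbas_to_gib 1310720  # -> 5, matching "Size (in LBAs): 1310720 (5GiB)"
    lbas_to_gib 262144   # -> 1, matching "Size (in LBAs): 262144 (1GiB)"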
00:10:25.672 Current Power State: Power State #0 00:10:25.672 Power State #0: 00:10:25.672 Max Power: 25.00 W 00:10:25.672 Non-Operational State: Operational 00:10:25.672 Entry Latency: 16 microseconds 00:10:25.672 Exit Latency: 4 microseconds 00:10:25.672 Relative Read Throughput: 0 00:10:25.672 Relative Read Latency: 0 00:10:25.672 Relative Write Throughput: 0 00:10:25.672 Relative Write Latency: 0 00:10:25.672 Idle Power: Not Reported 00:10:25.672 Active Power: Not Reported 00:10:25.672 Non-Operational Permissive Mode: Not Supported 00:10:25.672 00:10:25.672 Health Information 00:10:25.672 ================== 00:10:25.673 Critical Warnings: 00:10:25.673 Available Spare Space: OK 00:10:25.673 [2024-07-14 21:09:37.196959] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69112 terminated unexpected 00:10:25.673 Temperature: OK 00:10:25.673 Device Reliability: OK 00:10:25.673 Read Only: No 00:10:25.673 Volatile Memory Backup: OK 00:10:25.673 Current Temperature: 323 Kelvin (50 Celsius) 00:10:25.673 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:25.673 Available Spare: 0% 00:10:25.673 Available Spare Threshold: 0% 00:10:25.673 Life Percentage Used: 0% 00:10:25.673 Data Units Read: 742 00:10:25.673 Data Units Written: 589 00:10:25.673 Host Read Commands: 34870 00:10:25.673 Host Write Commands: 32548 00:10:25.673 Controller Busy Time: 0 minutes 00:10:25.673 Power Cycles: 0 00:10:25.673 Power On Hours: 0 hours 00:10:25.673 Unsafe Shutdowns: 0 00:10:25.673 Unrecoverable Media Errors: 0 00:10:25.673 Lifetime Error Log Entries: 0 00:10:25.673 Warning Temperature Time: 0 minutes 00:10:25.673 Critical Temperature Time: 0 minutes 00:10:25.673 00:10:25.673 Number of Queues 00:10:25.673 ================ 00:10:25.673 Number of I/O Submission Queues: 64 00:10:25.673 Number of I/O Completion Queues: 64 00:10:25.673 00:10:25.673 ZNS Specific Controller Data 00:10:25.673 ============================ 00:10:25.673 Zone Append Size Limit: 0 00:10:25.673 00:10:25.673 00:10:25.673 Active Namespaces 00:10:25.673 ================= 00:10:25.673 Namespace ID:1 00:10:25.673 Error Recovery Timeout: Unlimited 00:10:25.673 Command Set Identifier: NVM (00h) 00:10:25.673 Deallocate: Supported 00:10:25.673 Deallocated/Unwritten Error: Supported 00:10:25.673 Deallocated Read Value: All 0x00 00:10:25.673 Deallocate in Write Zeroes: Not Supported 00:10:25.673 Deallocated Guard Field: 0xFFFF 00:10:25.673 Flush: Supported 00:10:25.673 Reservation: Not Supported 00:10:25.673 Namespace Sharing Capabilities: Private 00:10:25.673 Size (in LBAs): 1310720 (5GiB) 00:10:25.673 Capacity (in LBAs): 1310720 (5GiB) 00:10:25.673 Utilization (in LBAs): 1310720 (5GiB) 00:10:25.673 Thin Provisioning: Not Supported 00:10:25.673 Per-NS Atomic Units: No 00:10:25.673 Maximum Single Source Range Length: 128 00:10:25.673 Maximum Copy Length: 128 00:10:25.673 Maximum Source Range Count: 128 00:10:25.673 NGUID/EUI64 Never Reused: No 00:10:25.673 Namespace Write Protected: No 00:10:25.673 Number of LBA Formats: 8 00:10:25.673 Current LBA Format: LBA Format #04 00:10:25.673 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:25.673 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:25.673 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:25.673 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:25.673 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:25.673 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:25.673 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:25.673 LBA Format #07:
Data Size: 4096 Metadata Size: 64 00:10:25.673 00:10:25.673 NVM Specific Namespace Data 00:10:25.673 =========================== 00:10:25.673 Logical Block Storage Tag Mask: 0 00:10:25.673 Protection Information Capabilities: 00:10:25.673 16b Guard Protection Information Storage Tag Support: No 00:10:25.673 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:25.673 Storage Tag Check Read Support: No 00:10:25.673 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.673 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.673 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.673 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.673 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.673 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.673 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.673 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.673 ===================================================== 00:10:25.673 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:25.673 ===================================================== 00:10:25.673 Controller Capabilities/Features 00:10:25.673 ================================ 00:10:25.673 Vendor ID: 1b36 00:10:25.673 Subsystem Vendor ID: 1af4 00:10:25.673 Serial Number: 12343 00:10:25.673 Model Number: QEMU NVMe Ctrl 00:10:25.673 Firmware Version: 8.0.0 00:10:25.673 Recommended Arb Burst: 6 00:10:25.673 IEEE OUI Identifier: 00 54 52 00:10:25.673 Multi-path I/O 00:10:25.673 May have multiple subsystem ports: No 00:10:25.673 May have multiple controllers: Yes 00:10:25.673 Associated with SR-IOV VF: No 00:10:25.673 Max Data Transfer Size: 524288 00:10:25.673 Max Number of Namespaces: 256 00:10:25.673 Max Number of I/O Queues: 64 00:10:25.673 NVMe Specification Version (VS): 1.4 00:10:25.673 NVMe Specification Version (Identify): 1.4 00:10:25.673 Maximum Queue Entries: 2048 00:10:25.673 Contiguous Queues Required: Yes 00:10:25.673 Arbitration Mechanisms Supported 00:10:25.673 Weighted Round Robin: Not Supported 00:10:25.673 Vendor Specific: Not Supported 00:10:25.673 Reset Timeout: 7500 ms 00:10:25.673 Doorbell Stride: 4 bytes 00:10:25.673 NVM Subsystem Reset: Not Supported 00:10:25.673 Command Sets Supported 00:10:25.673 NVM Command Set: Supported 00:10:25.673 Boot Partition: Not Supported 00:10:25.673 Memory Page Size Minimum: 4096 bytes 00:10:25.673 Memory Page Size Maximum: 65536 bytes 00:10:25.673 Persistent Memory Region: Not Supported 00:10:25.673 Optional Asynchronous Events Supported 00:10:25.673 Namespace Attribute Notices: Supported 00:10:25.673 Firmware Activation Notices: Not Supported 00:10:25.673 ANA Change Notices: Not Supported 00:10:25.673 PLE Aggregate Log Change Notices: Not Supported 00:10:25.673 LBA Status Info Alert Notices: Not Supported 00:10:25.673 EGE Aggregate Log Change Notices: Not Supported 00:10:25.673 Normal NVM Subsystem Shutdown event: Not Supported 00:10:25.673 Zone Descriptor Change Notices: Not Supported 00:10:25.673 Discovery Log Change Notices: Not Supported 00:10:25.673 Controller Attributes 00:10:25.673 128-bit Host Identifier: Not Supported 00:10:25.673 Non-Operational Permissive Mode: Not Supported 
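(Likewise, the SMART temperatures in the Health Information blocks are raw kelvin readings; the parenthetical Celsius values are the integer conversion K - 273, so 323 - 273 = 50 and 343 - 273 = 70. As a sketch, with k2c as a hypothetical helper name:)

    k2c() { echo $(( $1 - 273 )); }  # kelvin to whole degrees Celsius, as the tool prints it
    k2c 323  # -> 50 (Current Temperature)
    k2c 343  # -> 70 (Temperature Threshold)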
00:10:25.673 NVM Sets: Not Supported 00:10:25.673 Read Recovery Levels: Not Supported 00:10:25.673 Endurance Groups: Supported 00:10:25.673 Predictable Latency Mode: Not Supported 00:10:25.673 Traffic Based Keep ALive: Not Supported 00:10:25.673 Namespace Granularity: Not Supported 00:10:25.673 SQ Associations: Not Supported 00:10:25.673 UUID List: Not Supported 00:10:25.673 Multi-Domain Subsystem: Not Supported 00:10:25.673 Fixed Capacity Management: Not Supported 00:10:25.673 Variable Capacity Management: Not Supported 00:10:25.673 Delete Endurance Group: Not Supported 00:10:25.673 Delete NVM Set: Not Supported 00:10:25.673 Extended LBA Formats Supported: Supported 00:10:25.673 Flexible Data Placement Supported: Supported 00:10:25.673 00:10:25.673 Controller Memory Buffer Support 00:10:25.673 ================================ 00:10:25.673 Supported: No 00:10:25.673 00:10:25.673 Persistent Memory Region Support 00:10:25.673 ================================ 00:10:25.673 Supported: No 00:10:25.673 00:10:25.673 Admin Command Set Attributes 00:10:25.673 ============================ 00:10:25.673 Security Send/Receive: Not Supported 00:10:25.673 Format NVM: Supported 00:10:25.673 Firmware Activate/Download: Not Supported 00:10:25.673 Namespace Management: Supported 00:10:25.673 Device Self-Test: Not Supported 00:10:25.673 Directives: Supported 00:10:25.673 NVMe-MI: Not Supported 00:10:25.673 Virtualization Management: Not Supported 00:10:25.673 Doorbell Buffer Config: Supported 00:10:25.673 Get LBA Status Capability: Not Supported 00:10:25.673 Command & Feature Lockdown Capability: Not Supported 00:10:25.673 Abort Command Limit: 4 00:10:25.673 Async Event Request Limit: 4 00:10:25.673 Number of Firmware Slots: N/A 00:10:25.673 Firmware Slot 1 Read-Only: N/A 00:10:25.673 Firmware Activation Without Reset: N/A 00:10:25.673 Multiple Update Detection Support: N/A 00:10:25.673 Firmware Update Granularity: No Information Provided 00:10:25.673 Per-Namespace SMART Log: Yes 00:10:25.673 Asymmetric Namespace Access Log Page: Not Supported 00:10:25.673 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:25.673 Command Effects Log Page: Supported 00:10:25.673 Get Log Page Extended Data: Supported 00:10:25.673 Telemetry Log Pages: Not Supported 00:10:25.673 Persistent Event Log Pages: Not Supported 00:10:25.673 Supported Log Pages Log Page: May Support 00:10:25.673 Commands Supported & Effects Log Page: Not Supported 00:10:25.673 Feature Identifiers & Effects Log Page:May Support 00:10:25.673 NVMe-MI Commands & Effects Log Page: May Support 00:10:25.673 Data Area 4 for Telemetry Log: Not Supported 00:10:25.673 Error Log Page Entries Supported: 1 00:10:25.673 Keep Alive: Not Supported 00:10:25.673 00:10:25.673 NVM Command Set Attributes 00:10:25.673 ========================== 00:10:25.673 Submission Queue Entry Size 00:10:25.673 Max: 64 00:10:25.673 Min: 64 00:10:25.673 Completion Queue Entry Size 00:10:25.673 Max: 16 00:10:25.673 Min: 16 00:10:25.673 Number of Namespaces: 256 00:10:25.673 Compare Command: Supported 00:10:25.673 Write Uncorrectable Command: Not Supported 00:10:25.673 Dataset Management Command: Supported 00:10:25.673 Write Zeroes Command: Supported 00:10:25.673 Set Features Save Field: Supported 00:10:25.673 Reservations: Not Supported 00:10:25.674 Timestamp: Supported 00:10:25.674 Copy: Supported 00:10:25.674 Volatile Write Cache: Present 00:10:25.674 Atomic Write Unit (Normal): 1 00:10:25.674 Atomic Write Unit (PFail): 1 00:10:25.674 Atomic Compare & Write Unit: 1 00:10:25.674 Fused 
Compare & Write: Not Supported 00:10:25.674 Scatter-Gather List 00:10:25.674 SGL Command Set: Supported 00:10:25.674 SGL Keyed: Not Supported 00:10:25.674 SGL Bit Bucket Descriptor: Not Supported 00:10:25.674 SGL Metadata Pointer: Not Supported 00:10:25.674 Oversized SGL: Not Supported 00:10:25.674 SGL Metadata Address: Not Supported 00:10:25.674 SGL Offset: Not Supported 00:10:25.674 Transport SGL Data Block: Not Supported 00:10:25.674 Replay Protected Memory Block: Not Supported 00:10:25.674 00:10:25.674 Firmware Slot Information 00:10:25.674 ========================= 00:10:25.674 Active slot: 1 00:10:25.674 Slot 1 Firmware Revision: 1.0 00:10:25.674 00:10:25.674 00:10:25.674 Commands Supported and Effects 00:10:25.674 ============================== 00:10:25.674 Admin Commands 00:10:25.674 -------------- 00:10:25.674 Delete I/O Submission Queue (00h): Supported 00:10:25.674 Create I/O Submission Queue (01h): Supported 00:10:25.674 Get Log Page (02h): Supported 00:10:25.674 Delete I/O Completion Queue (04h): Supported 00:10:25.674 Create I/O Completion Queue (05h): Supported 00:10:25.674 Identify (06h): Supported 00:10:25.674 Abort (08h): Supported 00:10:25.674 Set Features (09h): Supported 00:10:25.674 Get Features (0Ah): Supported 00:10:25.674 Asynchronous Event Request (0Ch): Supported 00:10:25.674 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:25.674 Directive Send (19h): Supported 00:10:25.674 Directive Receive (1Ah): Supported 00:10:25.674 Virtualization Management (1Ch): Supported 00:10:25.674 Doorbell Buffer Config (7Ch): Supported 00:10:25.674 Format NVM (80h): Supported LBA-Change 00:10:25.674 I/O Commands 00:10:25.674 ------------ 00:10:25.674 Flush (00h): Supported LBA-Change 00:10:25.674 Write (01h): Supported LBA-Change 00:10:25.674 Read (02h): Supported 00:10:25.674 Compare (05h): Supported 00:10:25.674 Write Zeroes (08h): Supported LBA-Change 00:10:25.674 Dataset Management (09h): Supported LBA-Change 00:10:25.674 Unknown (0Ch): Supported 00:10:25.674 Unknown (12h): Supported 00:10:25.674 Copy (19h): Supported LBA-Change 00:10:25.674 Unknown (1Dh): Supported LBA-Change 00:10:25.674 00:10:25.674 Error Log 00:10:25.674 ========= 00:10:25.674 00:10:25.674 Arbitration 00:10:25.674 =========== 00:10:25.674 Arbitration Burst: no limit 00:10:25.674 00:10:25.674 Power Management 00:10:25.674 ================ 00:10:25.674 Number of Power States: 1 00:10:25.674 Current Power State: Power State #0 00:10:25.674 Power State #0: 00:10:25.674 Max Power: 25.00 W 00:10:25.674 Non-Operational State: Operational 00:10:25.674 Entry Latency: 16 microseconds 00:10:25.674 Exit Latency: 4 microseconds 00:10:25.674 Relative Read Throughput: 0 00:10:25.674 Relative Read Latency: 0 00:10:25.674 Relative Write Throughput: 0 00:10:25.674 Relative Write Latency: 0 00:10:25.674 Idle Power: Not Reported 00:10:25.674 Active Power: Not Reported 00:10:25.674 Non-Operational Permissive Mode: Not Supported 00:10:25.674 00:10:25.674 Health Information 00:10:25.674 ================== 00:10:25.674 Critical Warnings: 00:10:25.674 Available Spare Space: OK 00:10:25.674 Temperature: OK 00:10:25.674 Device Reliability: OK 00:10:25.674 Read Only: No 00:10:25.674 Volatile Memory Backup: OK 00:10:25.674 Current Temperature: 323 Kelvin (50 Celsius) 00:10:25.674 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:25.674 Available Spare: 0% 00:10:25.674 Available Spare Threshold: 0% 00:10:25.674 Life Percentage Used: 0% 00:10:25.674 Data Units Read: 823 00:10:25.674 Data Units Written: 716 00:10:25.674 
Host Read Commands: 34899 00:10:25.674 Host Write Commands: 33489 00:10:25.674 Controller Busy Time: 0 minutes 00:10:25.674 Power Cycles: 0 00:10:25.674 Power On Hours: 0 hours 00:10:25.674 Unsafe Shutdowns: 0 00:10:25.674 Unrecoverable Media Errors: 0 00:10:25.674 Lifetime Error Log Entries: 0 00:10:25.674 Warning Temperature Time: 0 minutes 00:10:25.674 Critical Temperature Time: 0 minutes 00:10:25.674 00:10:25.674 Number of Queues 00:10:25.674 ================ 00:10:25.674 Number of I/O Submission Queues: 64 00:10:25.674 Number of I/O Completion Queues: 64 00:10:25.674 00:10:25.674 ZNS Specific Controller Data 00:10:25.674 ============================ 00:10:25.674 Zone Append Size Limit: 0 00:10:25.674 00:10:25.674 00:10:25.674 Active Namespaces 00:10:25.674 ================= 00:10:25.674 Namespace ID:1 00:10:25.674 Error Recovery Timeout: Unlimited 00:10:25.674 Command Set Identifier: NVM (00h) 00:10:25.674 Deallocate: Supported 00:10:25.674 Deallocated/Unwritten Error: Supported 00:10:25.674 Deallocated Read Value: All 0x00 00:10:25.674 Deallocate in Write Zeroes: Not Supported 00:10:25.674 Deallocated Guard Field: 0xFFFF 00:10:25.674 Flush: Supported 00:10:25.674 Reservation: Not Supported 00:10:25.674 Namespace Sharing Capabilities: Multiple Controllers 00:10:25.674 Size (in LBAs): 262144 (1GiB) 00:10:25.674 Capacity (in LBAs): 262144 (1GiB) 00:10:25.674 Utilization (in LBAs): 262144 (1GiB) 00:10:25.674 Thin Provisioning: Not Supported 00:10:25.674 Per-NS Atomic Units: No 00:10:25.674 Maximum Single Source Range Length: 128 00:10:25.674 Maximum Copy Length: 128 00:10:25.674 Maximum Source Range Count: 128 00:10:25.674 NGUID/EUI64 Never Reused: No 00:10:25.674 Namespace Write Protected: No 00:10:25.674 Endurance group ID: 1 00:10:25.674 Number of LBA Formats: 8 00:10:25.674 Current LBA Format: LBA Format #04 00:10:25.674 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:25.674 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:25.674 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:25.674 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:25.674 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:25.674 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:25.674 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:25.674 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:25.674 00:10:25.674 Get Feature FDP: 00:10:25.674 ================ 00:10:25.674 Enabled: Yes 00:10:25.674 FDP configuration index: 0 00:10:25.674 00:10:25.674 FDP configurations log page 00:10:25.674 =========================== 00:10:25.674 Number of FDP configurations: 1 00:10:25.674 Version: 0 00:10:25.674 Size: 112 00:10:25.674 FDP Configuration Descriptor: 0 00:10:25.674 Descriptor Size: 96 00:10:25.674 Reclaim Group Identifier format: 2 00:10:25.674 FDP Volatile Write Cache: Not Present 00:10:25.674 FDP Configuration: Valid 00:10:25.674 Vendor Specific Size: 0 00:10:25.674 Number of Reclaim Groups: 2 00:10:25.674 Number of Reclaim Unit Handles: 8 00:10:25.674 Max Placement Identifiers: 128 00:10:25.674 Number of Namespaces Supported: 256 00:10:25.674 Reclaim unit Nominal Size: 6000000 bytes 00:10:25.674 Estimated Reclaim Unit Time Limit: Not Reported 00:10:25.674 RUH Desc #000: RUH Type: Initially Isolated 00:10:25.674 RUH Desc #001: RUH Type: Initially Isolated 00:10:25.674 RUH Desc #002: RUH Type: Initially Isolated 00:10:25.674 RUH Desc #003: RUH Type: Initially Isolated 00:10:25.674 RUH Desc #004: RUH Type: Initially Isolated 00:10:25.674 RUH Desc #005: RUH Type: Initially
Isolated 00:10:25.674 RUH Desc #006: RUH Type: Initially Isolated 00:10:25.674 RUH Desc #007: RUH Type: Initially Isolated 00:10:25.674 00:10:25.674 FDP reclaim unit handle usage log page 00:10:25.674 ====================================== 00:10:25.674 Number of Reclaim Unit Handles: 8 00:10:25.674 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:25.674 RUH Usage Desc #001: RUH Attributes: Unused 00:10:25.674 RUH Usage Desc #002: RUH Attributes: Unused 00:10:25.674 RUH Usage Desc #003: RUH Attributes: Unused 00:10:25.674 RUH Usage Desc #004: RUH Attributes: Unused 00:10:25.674 RUH Usage Desc #005: RUH Attributes: Unused 00:10:25.674 RUH Usage Desc #006: RUH Attributes: Unused 00:10:25.674 RUH Usage Desc #007: RUH Attributes: Unused 00:10:25.674 00:10:25.674 FDP statistics log page 00:10:25.674 ======================= 00:10:25.674 Host bytes with metadata written: 446865408 00:10:25.674 [2024-07-14 21:09:37.199052] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69112 terminated unexpected 00:10:25.674 Media bytes with metadata written: 446918656 00:10:25.674 Media bytes erased: 0 00:10:25.674 00:10:25.674 FDP events log page 00:10:25.674 =================== 00:10:25.674 Number of FDP events: 0 00:10:25.674 00:10:25.674 NVM Specific Namespace Data 00:10:25.674 =========================== 00:10:25.674 Logical Block Storage Tag Mask: 0 00:10:25.674 Protection Information Capabilities: 00:10:25.674 16b Guard Protection Information Storage Tag Support: No 00:10:25.674 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:25.674 Storage Tag Check Read Support: No 00:10:25.674 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.674 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.674 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.674 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.674 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.675 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.675 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.675 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.675 ===================================================== 00:10:25.675 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:25.675 ===================================================== 00:10:25.675 Controller Capabilities/Features 00:10:25.675 ================================ 00:10:25.675 Vendor ID: 1b36 00:10:25.675 Subsystem Vendor ID: 1af4 00:10:25.675 Serial Number: 12342 00:10:25.675 Model Number: QEMU NVMe Ctrl 00:10:25.675 Firmware Version: 8.0.0 00:10:25.675 Recommended Arb Burst: 6 00:10:25.675 IEEE OUI Identifier: 00 54 52 00:10:25.675 Multi-path I/O 00:10:25.675 May have multiple subsystem ports: No 00:10:25.675 May have multiple controllers: No 00:10:25.675 Associated with SR-IOV VF: No 00:10:25.675 Max Data Transfer Size: 524288 00:10:25.675 Max Number of Namespaces: 256 00:10:25.675 Max Number of I/O Queues: 64 00:10:25.675 NVMe Specification Version (VS): 1.4 00:10:25.675 NVMe Specification Version (Identify): 1.4 00:10:25.675 Maximum Queue Entries: 2048 00:10:25.675 Contiguous Queues Required: Yes 00:10:25.675 Arbitration
Mechanisms Supported 00:10:25.675 Weighted Round Robin: Not Supported 00:10:25.675 Vendor Specific: Not Supported 00:10:25.675 Reset Timeout: 7500 ms 00:10:25.675 Doorbell Stride: 4 bytes 00:10:25.675 NVM Subsystem Reset: Not Supported 00:10:25.675 Command Sets Supported 00:10:25.675 NVM Command Set: Supported 00:10:25.675 Boot Partition: Not Supported 00:10:25.675 Memory Page Size Minimum: 4096 bytes 00:10:25.675 Memory Page Size Maximum: 65536 bytes 00:10:25.675 Persistent Memory Region: Not Supported 00:10:25.675 Optional Asynchronous Events Supported 00:10:25.675 Namespace Attribute Notices: Supported 00:10:25.675 Firmware Activation Notices: Not Supported 00:10:25.675 ANA Change Notices: Not Supported 00:10:25.675 PLE Aggregate Log Change Notices: Not Supported 00:10:25.675 LBA Status Info Alert Notices: Not Supported 00:10:25.675 EGE Aggregate Log Change Notices: Not Supported 00:10:25.675 Normal NVM Subsystem Shutdown event: Not Supported 00:10:25.675 Zone Descriptor Change Notices: Not Supported 00:10:25.675 Discovery Log Change Notices: Not Supported 00:10:25.675 Controller Attributes 00:10:25.675 128-bit Host Identifier: Not Supported 00:10:25.675 Non-Operational Permissive Mode: Not Supported 00:10:25.675 NVM Sets: Not Supported 00:10:25.675 Read Recovery Levels: Not Supported 00:10:25.675 Endurance Groups: Not Supported 00:10:25.675 Predictable Latency Mode: Not Supported 00:10:25.675 Traffic Based Keep ALive: Not Supported 00:10:25.675 Namespace Granularity: Not Supported 00:10:25.675 SQ Associations: Not Supported 00:10:25.675 UUID List: Not Supported 00:10:25.675 Multi-Domain Subsystem: Not Supported 00:10:25.675 Fixed Capacity Management: Not Supported 00:10:25.675 Variable Capacity Management: Not Supported 00:10:25.675 Delete Endurance Group: Not Supported 00:10:25.675 Delete NVM Set: Not Supported 00:10:25.675 Extended LBA Formats Supported: Supported 00:10:25.675 Flexible Data Placement Supported: Not Supported 00:10:25.675 00:10:25.675 Controller Memory Buffer Support 00:10:25.675 ================================ 00:10:25.675 Supported: No 00:10:25.675 00:10:25.675 Persistent Memory Region Support 00:10:25.675 ================================ 00:10:25.675 Supported: No 00:10:25.675 00:10:25.675 Admin Command Set Attributes 00:10:25.675 ============================ 00:10:25.675 Security Send/Receive: Not Supported 00:10:25.675 Format NVM: Supported 00:10:25.675 Firmware Activate/Download: Not Supported 00:10:25.675 Namespace Management: Supported 00:10:25.675 Device Self-Test: Not Supported 00:10:25.675 Directives: Supported 00:10:25.675 NVMe-MI: Not Supported 00:10:25.675 Virtualization Management: Not Supported 00:10:25.675 Doorbell Buffer Config: Supported 00:10:25.675 Get LBA Status Capability: Not Supported 00:10:25.675 Command & Feature Lockdown Capability: Not Supported 00:10:25.675 Abort Command Limit: 4 00:10:25.675 Async Event Request Limit: 4 00:10:25.675 Number of Firmware Slots: N/A 00:10:25.675 Firmware Slot 1 Read-Only: N/A 00:10:25.675 Firmware Activation Without Reset: N/A 00:10:25.675 Multiple Update Detection Support: N/A 00:10:25.675 Firmware Update Granularity: No Information Provided 00:10:25.675 Per-Namespace SMART Log: Yes 00:10:25.675 Asymmetric Namespace Access Log Page: Not Supported 00:10:25.675 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:25.675 Command Effects Log Page: Supported 00:10:25.675 Get Log Page Extended Data: Supported 00:10:25.675 Telemetry Log Pages: Not Supported 00:10:25.675 Persistent Event Log Pages: Not Supported 
00:10:25.675 Supported Log Pages Log Page: May Support 00:10:25.675 Commands Supported & Effects Log Page: Not Supported 00:10:25.675 Feature Identifiers & Effects Log Page:May Support 00:10:25.675 NVMe-MI Commands & Effects Log Page: May Support 00:10:25.675 Data Area 4 for Telemetry Log: Not Supported 00:10:25.675 Error Log Page Entries Supported: 1 00:10:25.675 Keep Alive: Not Supported 00:10:25.675 00:10:25.675 NVM Command Set Attributes 00:10:25.675 ========================== 00:10:25.675 Submission Queue Entry Size 00:10:25.675 Max: 64 00:10:25.675 Min: 64 00:10:25.675 Completion Queue Entry Size 00:10:25.675 Max: 16 00:10:25.675 Min: 16 00:10:25.675 Number of Namespaces: 256 00:10:25.675 Compare Command: Supported 00:10:25.675 Write Uncorrectable Command: Not Supported 00:10:25.675 Dataset Management Command: Supported 00:10:25.675 Write Zeroes Command: Supported 00:10:25.675 Set Features Save Field: Supported 00:10:25.675 Reservations: Not Supported 00:10:25.675 Timestamp: Supported 00:10:25.675 Copy: Supported 00:10:25.675 Volatile Write Cache: Present 00:10:25.675 Atomic Write Unit (Normal): 1 00:10:25.675 Atomic Write Unit (PFail): 1 00:10:25.675 Atomic Compare & Write Unit: 1 00:10:25.675 Fused Compare & Write: Not Supported 00:10:25.675 Scatter-Gather List 00:10:25.675 SGL Command Set: Supported 00:10:25.675 SGL Keyed: Not Supported 00:10:25.675 SGL Bit Bucket Descriptor: Not Supported 00:10:25.675 SGL Metadata Pointer: Not Supported 00:10:25.675 Oversized SGL: Not Supported 00:10:25.675 SGL Metadata Address: Not Supported 00:10:25.675 SGL Offset: Not Supported 00:10:25.675 Transport SGL Data Block: Not Supported 00:10:25.675 Replay Protected Memory Block: Not Supported 00:10:25.675 00:10:25.675 Firmware Slot Information 00:10:25.675 ========================= 00:10:25.675 Active slot: 1 00:10:25.675 Slot 1 Firmware Revision: 1.0 00:10:25.675 00:10:25.675 00:10:25.675 Commands Supported and Effects 00:10:25.675 ============================== 00:10:25.675 Admin Commands 00:10:25.675 -------------- 00:10:25.675 Delete I/O Submission Queue (00h): Supported 00:10:25.675 Create I/O Submission Queue (01h): Supported 00:10:25.675 Get Log Page (02h): Supported 00:10:25.675 Delete I/O Completion Queue (04h): Supported 00:10:25.675 Create I/O Completion Queue (05h): Supported 00:10:25.675 Identify (06h): Supported 00:10:25.675 Abort (08h): Supported 00:10:25.675 Set Features (09h): Supported 00:10:25.675 Get Features (0Ah): Supported 00:10:25.675 Asynchronous Event Request (0Ch): Supported 00:10:25.675 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:25.675 Directive Send (19h): Supported 00:10:25.676 Directive Receive (1Ah): Supported 00:10:25.676 Virtualization Management (1Ch): Supported 00:10:25.676 Doorbell Buffer Config (7Ch): Supported 00:10:25.676 Format NVM (80h): Supported LBA-Change 00:10:25.676 I/O Commands 00:10:25.676 ------------ 00:10:25.676 Flush (00h): Supported LBA-Change 00:10:25.676 Write (01h): Supported LBA-Change 00:10:25.676 Read (02h): Supported 00:10:25.676 Compare (05h): Supported 00:10:25.676 Write Zeroes (08h): Supported LBA-Change 00:10:25.676 Dataset Management (09h): Supported LBA-Change 00:10:25.676 Unknown (0Ch): Supported 00:10:25.676 Unknown (12h): Supported 00:10:25.676 Copy (19h): Supported LBA-Change 00:10:25.676 Unknown (1Dh): Supported LBA-Change 00:10:25.676 00:10:25.676 Error Log 00:10:25.676 ========= 00:10:25.676 00:10:25.676 Arbitration 00:10:25.676 =========== 00:10:25.676 Arbitration Burst: no limit 00:10:25.676 00:10:25.676 
Power Management 00:10:25.676 ================ 00:10:25.676 Number of Power States: 1 00:10:25.676 Current Power State: Power State #0 00:10:25.676 Power State #0: 00:10:25.676 Max Power: 25.00 W 00:10:25.676 Non-Operational State: Operational 00:10:25.676 Entry Latency: 16 microseconds 00:10:25.676 Exit Latency: 4 microseconds 00:10:25.676 Relative Read Throughput: 0 00:10:25.676 Relative Read Latency: 0 00:10:25.676 Relative Write Throughput: 0 00:10:25.676 Relative Write Latency: 0 00:10:25.676 Idle Power: Not Reported 00:10:25.676 Active Power: Not Reported 00:10:25.676 Non-Operational Permissive Mode: Not Supported 00:10:25.676 00:10:25.676 Health Information 00:10:25.676 ================== 00:10:25.676 Critical Warnings: 00:10:25.676 Available Spare Space: OK 00:10:25.676 Temperature: OK 00:10:25.676 Device Reliability: OK 00:10:25.676 Read Only: No 00:10:25.676 Volatile Memory Backup: OK 00:10:25.676 Current Temperature: 323 Kelvin (50 Celsius) 00:10:25.676 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:25.676 Available Spare: 0% 00:10:25.676 Available Spare Threshold: 0% 00:10:25.676 Life Percentage Used: 0% 00:10:25.676 Data Units Read: 2230 00:10:25.676 Data Units Written: 1910 00:10:25.676 Host Read Commands: 102723 00:10:25.676 Host Write Commands: 98493 00:10:25.676 Controller Busy Time: 0 minutes 00:10:25.676 Power Cycles: 0 00:10:25.676 Power On Hours: 0 hours 00:10:25.676 Unsafe Shutdowns: 0 00:10:25.676 Unrecoverable Media Errors: 0 00:10:25.676 Lifetime Error Log Entries: 0 00:10:25.676 Warning Temperature Time: 0 minutes 00:10:25.676 Critical Temperature Time: 0 minutes 00:10:25.676 00:10:25.676 Number of Queues 00:10:25.676 ================ 00:10:25.676 Number of I/O Submission Queues: 64 00:10:25.676 Number of I/O Completion Queues: 64 00:10:25.676 00:10:25.676 ZNS Specific Controller Data 00:10:25.676 ============================ 00:10:25.676 Zone Append Size Limit: 0 00:10:25.676 00:10:25.676 00:10:25.676 Active Namespaces 00:10:25.676 ================= 00:10:25.676 Namespace ID:1 00:10:25.676 Error Recovery Timeout: Unlimited 00:10:25.676 Command Set Identifier: NVM (00h) 00:10:25.676 Deallocate: Supported 00:10:25.676 Deallocated/Unwritten Error: Supported 00:10:25.676 Deallocated Read Value: All 0x00 00:10:25.676 Deallocate in Write Zeroes: Not Supported 00:10:25.676 Deallocated Guard Field: 0xFFFF 00:10:25.676 Flush: Supported 00:10:25.676 Reservation: Not Supported 00:10:25.676 Namespace Sharing Capabilities: Private 00:10:25.676 Size (in LBAs): 1048576 (4GiB) 00:10:25.676 Capacity (in LBAs): 1048576 (4GiB) 00:10:25.676 Utilization (in LBAs): 1048576 (4GiB) 00:10:25.676 Thin Provisioning: Not Supported 00:10:25.676 Per-NS Atomic Units: No 00:10:25.676 Maximum Single Source Range Length: 128 00:10:25.676 Maximum Copy Length: 128 00:10:25.676 Maximum Source Range Count: 128 00:10:25.676 NGUID/EUI64 Never Reused: No 00:10:25.676 Namespace Write Protected: No 00:10:25.676 Number of LBA Formats: 8 00:10:25.676 Current LBA Format: LBA Format #04 00:10:25.676 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:25.676 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:25.676 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:25.676 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:25.676 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:25.676 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:25.676 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:25.676 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:25.676 00:10:25.676 NVM 
Specific Namespace Data 00:10:25.676 =========================== 00:10:25.676 Logical Block Storage Tag Mask: 0 00:10:25.676 Protection Information Capabilities: 00:10:25.676 16b Guard Protection Information Storage Tag Support: No 00:10:25.676 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:25.676 Storage Tag Check Read Support: No 00:10:25.676 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Namespace ID:2 00:10:25.676 Error Recovery Timeout: Unlimited 00:10:25.676 Command Set Identifier: NVM (00h) 00:10:25.676 Deallocate: Supported 00:10:25.676 Deallocated/Unwritten Error: Supported 00:10:25.676 Deallocated Read Value: All 0x00 00:10:25.676 Deallocate in Write Zeroes: Not Supported 00:10:25.676 Deallocated Guard Field: 0xFFFF 00:10:25.676 Flush: Supported 00:10:25.676 Reservation: Not Supported 00:10:25.676 Namespace Sharing Capabilities: Private 00:10:25.676 Size (in LBAs): 1048576 (4GiB) 00:10:25.676 Capacity (in LBAs): 1048576 (4GiB) 00:10:25.676 Utilization (in LBAs): 1048576 (4GiB) 00:10:25.676 Thin Provisioning: Not Supported 00:10:25.676 Per-NS Atomic Units: No 00:10:25.676 Maximum Single Source Range Length: 128 00:10:25.676 Maximum Copy Length: 128 00:10:25.676 Maximum Source Range Count: 128 00:10:25.676 NGUID/EUI64 Never Reused: No 00:10:25.676 Namespace Write Protected: No 00:10:25.676 Number of LBA Formats: 8 00:10:25.676 Current LBA Format: LBA Format #04 00:10:25.676 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:25.676 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:25.676 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:25.676 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:25.676 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:25.676 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:25.676 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:25.676 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:25.676 00:10:25.676 NVM Specific Namespace Data 00:10:25.676 =========================== 00:10:25.676 Logical Block Storage Tag Mask: 0 00:10:25.676 Protection Information Capabilities: 00:10:25.676 16b Guard Protection Information Storage Tag Support: No 00:10:25.676 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:25.676 Storage Tag Check Read Support: No 00:10:25.676 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.676 Namespace ID:3 00:10:25.676 Error Recovery Timeout: Unlimited 00:10:25.676 Command Set Identifier: NVM (00h) 00:10:25.676 Deallocate: Supported 00:10:25.676 Deallocated/Unwritten Error: Supported 00:10:25.676 Deallocated Read Value: All 0x00 00:10:25.676 Deallocate in Write Zeroes: Not Supported 00:10:25.676 Deallocated Guard Field: 0xFFFF 00:10:25.676 Flush: Supported 00:10:25.676 Reservation: Not Supported 00:10:25.676 Namespace Sharing Capabilities: Private 00:10:25.676 Size (in LBAs): 1048576 (4GiB) 00:10:25.935 Capacity (in LBAs): 1048576 (4GiB) 00:10:25.935 Utilization (in LBAs): 1048576 (4GiB) 00:10:25.935 Thin Provisioning: Not Supported 00:10:25.935 Per-NS Atomic Units: No 00:10:25.935 Maximum Single Source Range Length: 128 00:10:25.935 Maximum Copy Length: 128 00:10:25.935 Maximum Source Range Count: 128 00:10:25.935 NGUID/EUI64 Never Reused: No 00:10:25.935 Namespace Write Protected: No 00:10:25.935 Number of LBA Formats: 8 00:10:25.935 Current LBA Format: LBA Format #04 00:10:25.935 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:25.935 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:25.935 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:25.935 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:25.935 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:25.935 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:25.935 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:25.935 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:25.935 00:10:25.935 NVM Specific Namespace Data 00:10:25.935 =========================== 00:10:25.935 Logical Block Storage Tag Mask: 0 00:10:25.935 Protection Information Capabilities: 00:10:25.935 16b Guard Protection Information Storage Tag Support: No 00:10:25.935 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:25.935 Storage Tag Check Read Support: No 00:10:25.935 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.935 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.935 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.935 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.935 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.935 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.935 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.935 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:25.935 21:09:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:25.935 21:09:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:26.194 ===================================================== 00:10:26.194 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:26.194 
===================================================== 00:10:26.194 Controller Capabilities/Features 00:10:26.194 ================================ 00:10:26.194 Vendor ID: 1b36 00:10:26.194 Subsystem Vendor ID: 1af4 00:10:26.194 Serial Number: 12340 00:10:26.194 Model Number: QEMU NVMe Ctrl 00:10:26.195 Firmware Version: 8.0.0 00:10:26.195 Recommended Arb Burst: 6 00:10:26.195 IEEE OUI Identifier: 00 54 52 00:10:26.195 Multi-path I/O 00:10:26.195 May have multiple subsystem ports: No 00:10:26.195 May have multiple controllers: No 00:10:26.195 Associated with SR-IOV VF: No 00:10:26.195 Max Data Transfer Size: 524288 00:10:26.195 Max Number of Namespaces: 256 00:10:26.195 Max Number of I/O Queues: 64 00:10:26.195 NVMe Specification Version (VS): 1.4 00:10:26.195 NVMe Specification Version (Identify): 1.4 00:10:26.195 Maximum Queue Entries: 2048 00:10:26.195 Contiguous Queues Required: Yes 00:10:26.195 Arbitration Mechanisms Supported 00:10:26.195 Weighted Round Robin: Not Supported 00:10:26.195 Vendor Specific: Not Supported 00:10:26.195 Reset Timeout: 7500 ms 00:10:26.195 Doorbell Stride: 4 bytes 00:10:26.195 NVM Subsystem Reset: Not Supported 00:10:26.195 Command Sets Supported 00:10:26.195 NVM Command Set: Supported 00:10:26.195 Boot Partition: Not Supported 00:10:26.195 Memory Page Size Minimum: 4096 bytes 00:10:26.195 Memory Page Size Maximum: 65536 bytes 00:10:26.195 Persistent Memory Region: Not Supported 00:10:26.195 Optional Asynchronous Events Supported 00:10:26.195 Namespace Attribute Notices: Supported 00:10:26.195 Firmware Activation Notices: Not Supported 00:10:26.195 ANA Change Notices: Not Supported 00:10:26.195 PLE Aggregate Log Change Notices: Not Supported 00:10:26.195 LBA Status Info Alert Notices: Not Supported 00:10:26.195 EGE Aggregate Log Change Notices: Not Supported 00:10:26.195 Normal NVM Subsystem Shutdown event: Not Supported 00:10:26.195 Zone Descriptor Change Notices: Not Supported 00:10:26.195 Discovery Log Change Notices: Not Supported 00:10:26.195 Controller Attributes 00:10:26.195 128-bit Host Identifier: Not Supported 00:10:26.195 Non-Operational Permissive Mode: Not Supported 00:10:26.195 NVM Sets: Not Supported 00:10:26.195 Read Recovery Levels: Not Supported 00:10:26.195 Endurance Groups: Not Supported 00:10:26.195 Predictable Latency Mode: Not Supported 00:10:26.195 Traffic Based Keep ALive: Not Supported 00:10:26.195 Namespace Granularity: Not Supported 00:10:26.195 SQ Associations: Not Supported 00:10:26.195 UUID List: Not Supported 00:10:26.195 Multi-Domain Subsystem: Not Supported 00:10:26.195 Fixed Capacity Management: Not Supported 00:10:26.195 Variable Capacity Management: Not Supported 00:10:26.195 Delete Endurance Group: Not Supported 00:10:26.195 Delete NVM Set: Not Supported 00:10:26.195 Extended LBA Formats Supported: Supported 00:10:26.195 Flexible Data Placement Supported: Not Supported 00:10:26.195 00:10:26.195 Controller Memory Buffer Support 00:10:26.195 ================================ 00:10:26.195 Supported: No 00:10:26.195 00:10:26.195 Persistent Memory Region Support 00:10:26.195 ================================ 00:10:26.195 Supported: No 00:10:26.195 00:10:26.195 Admin Command Set Attributes 00:10:26.195 ============================ 00:10:26.195 Security Send/Receive: Not Supported 00:10:26.195 Format NVM: Supported 00:10:26.195 Firmware Activate/Download: Not Supported 00:10:26.195 Namespace Management: Supported 00:10:26.195 Device Self-Test: Not Supported 00:10:26.195 Directives: Supported 00:10:26.195 NVMe-MI: Not Supported 
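(This second dump of 0000:00:10.0 comes from the per-bdf loop traced at nvme.sh@15-16 above, which reruns the identify binary once per controller with an explicit PCIe transport ID instead of probing everything; in sketch form, under the same repo layout assumed earlier:)

    for bdf in "${bdfs[@]}"; do
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done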
00:10:26.195 Virtualization Management: Not Supported 00:10:26.195 Doorbell Buffer Config: Supported 00:10:26.195 Get LBA Status Capability: Not Supported 00:10:26.195 Command & Feature Lockdown Capability: Not Supported 00:10:26.195 Abort Command Limit: 4 00:10:26.195 Async Event Request Limit: 4 00:10:26.195 Number of Firmware Slots: N/A 00:10:26.195 Firmware Slot 1 Read-Only: N/A 00:10:26.195 Firmware Activation Without Reset: N/A 00:10:26.195 Multiple Update Detection Support: N/A 00:10:26.195 Firmware Update Granularity: No Information Provided 00:10:26.195 Per-Namespace SMART Log: Yes 00:10:26.195 Asymmetric Namespace Access Log Page: Not Supported 00:10:26.195 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:26.195 Command Effects Log Page: Supported 00:10:26.195 Get Log Page Extended Data: Supported 00:10:26.195 Telemetry Log Pages: Not Supported 00:10:26.195 Persistent Event Log Pages: Not Supported 00:10:26.195 Supported Log Pages Log Page: May Support 00:10:26.195 Commands Supported & Effects Log Page: Not Supported 00:10:26.195 Feature Identifiers & Effects Log Page:May Support 00:10:26.195 NVMe-MI Commands & Effects Log Page: May Support 00:10:26.195 Data Area 4 for Telemetry Log: Not Supported 00:10:26.195 Error Log Page Entries Supported: 1 00:10:26.195 Keep Alive: Not Supported 00:10:26.195 00:10:26.195 NVM Command Set Attributes 00:10:26.195 ========================== 00:10:26.195 Submission Queue Entry Size 00:10:26.195 Max: 64 00:10:26.195 Min: 64 00:10:26.195 Completion Queue Entry Size 00:10:26.195 Max: 16 00:10:26.195 Min: 16 00:10:26.195 Number of Namespaces: 256 00:10:26.195 Compare Command: Supported 00:10:26.195 Write Uncorrectable Command: Not Supported 00:10:26.195 Dataset Management Command: Supported 00:10:26.195 Write Zeroes Command: Supported 00:10:26.195 Set Features Save Field: Supported 00:10:26.195 Reservations: Not Supported 00:10:26.195 Timestamp: Supported 00:10:26.195 Copy: Supported 00:10:26.195 Volatile Write Cache: Present 00:10:26.195 Atomic Write Unit (Normal): 1 00:10:26.195 Atomic Write Unit (PFail): 1 00:10:26.195 Atomic Compare & Write Unit: 1 00:10:26.195 Fused Compare & Write: Not Supported 00:10:26.195 Scatter-Gather List 00:10:26.195 SGL Command Set: Supported 00:10:26.195 SGL Keyed: Not Supported 00:10:26.195 SGL Bit Bucket Descriptor: Not Supported 00:10:26.195 SGL Metadata Pointer: Not Supported 00:10:26.195 Oversized SGL: Not Supported 00:10:26.195 SGL Metadata Address: Not Supported 00:10:26.195 SGL Offset: Not Supported 00:10:26.195 Transport SGL Data Block: Not Supported 00:10:26.195 Replay Protected Memory Block: Not Supported 00:10:26.195 00:10:26.195 Firmware Slot Information 00:10:26.195 ========================= 00:10:26.195 Active slot: 1 00:10:26.195 Slot 1 Firmware Revision: 1.0 00:10:26.195 00:10:26.195 00:10:26.195 Commands Supported and Effects 00:10:26.195 ============================== 00:10:26.195 Admin Commands 00:10:26.195 -------------- 00:10:26.195 Delete I/O Submission Queue (00h): Supported 00:10:26.195 Create I/O Submission Queue (01h): Supported 00:10:26.195 Get Log Page (02h): Supported 00:10:26.195 Delete I/O Completion Queue (04h): Supported 00:10:26.195 Create I/O Completion Queue (05h): Supported 00:10:26.195 Identify (06h): Supported 00:10:26.195 Abort (08h): Supported 00:10:26.195 Set Features (09h): Supported 00:10:26.195 Get Features (0Ah): Supported 00:10:26.195 Asynchronous Event Request (0Ch): Supported 00:10:26.195 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:26.195 Directive 
Send (19h): Supported 00:10:26.195 Directive Receive (1Ah): Supported 00:10:26.195 Virtualization Management (1Ch): Supported 00:10:26.195 Doorbell Buffer Config (7Ch): Supported 00:10:26.195 Format NVM (80h): Supported LBA-Change 00:10:26.195 I/O Commands 00:10:26.195 ------------ 00:10:26.195 Flush (00h): Supported LBA-Change 00:10:26.195 Write (01h): Supported LBA-Change 00:10:26.195 Read (02h): Supported 00:10:26.195 Compare (05h): Supported 00:10:26.195 Write Zeroes (08h): Supported LBA-Change 00:10:26.195 Dataset Management (09h): Supported LBA-Change 00:10:26.195 Unknown (0Ch): Supported 00:10:26.195 Unknown (12h): Supported 00:10:26.195 Copy (19h): Supported LBA-Change 00:10:26.195 Unknown (1Dh): Supported LBA-Change 00:10:26.195 00:10:26.195 Error Log 00:10:26.195 ========= 00:10:26.195 00:10:26.195 Arbitration 00:10:26.195 =========== 00:10:26.195 Arbitration Burst: no limit 00:10:26.195 00:10:26.195 Power Management 00:10:26.195 ================ 00:10:26.195 Number of Power States: 1 00:10:26.195 Current Power State: Power State #0 00:10:26.195 Power State #0: 00:10:26.195 Max Power: 25.00 W 00:10:26.195 Non-Operational State: Operational 00:10:26.195 Entry Latency: 16 microseconds 00:10:26.195 Exit Latency: 4 microseconds 00:10:26.195 Relative Read Throughput: 0 00:10:26.195 Relative Read Latency: 0 00:10:26.195 Relative Write Throughput: 0 00:10:26.195 Relative Write Latency: 0 00:10:26.195 Idle Power: Not Reported 00:10:26.195 Active Power: Not Reported 00:10:26.195 Non-Operational Permissive Mode: Not Supported 00:10:26.195 00:10:26.195 Health Information 00:10:26.195 ================== 00:10:26.195 Critical Warnings: 00:10:26.195 Available Spare Space: OK 00:10:26.195 Temperature: OK 00:10:26.195 Device Reliability: OK 00:10:26.195 Read Only: No 00:10:26.195 Volatile Memory Backup: OK 00:10:26.195 Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.195 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:26.195 Available Spare: 0% 00:10:26.195 Available Spare Threshold: 0% 00:10:26.195 Life Percentage Used: 0% 00:10:26.195 Data Units Read: 1035 00:10:26.195 Data Units Written: 862 00:10:26.196 Host Read Commands: 49347 00:10:26.196 Host Write Commands: 47787 00:10:26.196 Controller Busy Time: 0 minutes 00:10:26.196 Power Cycles: 0 00:10:26.196 Power On Hours: 0 hours 00:10:26.196 Unsafe Shutdowns: 0 00:10:26.196 Unrecoverable Media Errors: 0 00:10:26.196 Lifetime Error Log Entries: 0 00:10:26.196 Warning Temperature Time: 0 minutes 00:10:26.196 Critical Temperature Time: 0 minutes 00:10:26.196 00:10:26.196 Number of Queues 00:10:26.196 ================ 00:10:26.196 Number of I/O Submission Queues: 64 00:10:26.196 Number of I/O Completion Queues: 64 00:10:26.196 00:10:26.196 ZNS Specific Controller Data 00:10:26.196 ============================ 00:10:26.196 Zone Append Size Limit: 0 00:10:26.196 00:10:26.196 00:10:26.196 Active Namespaces 00:10:26.196 ================= 00:10:26.196 Namespace ID:1 00:10:26.196 Error Recovery Timeout: Unlimited 00:10:26.196 Command Set Identifier: NVM (00h) 00:10:26.196 Deallocate: Supported 00:10:26.196 Deallocated/Unwritten Error: Supported 00:10:26.196 Deallocated Read Value: All 0x00 00:10:26.196 Deallocate in Write Zeroes: Not Supported 00:10:26.196 Deallocated Guard Field: 0xFFFF 00:10:26.196 Flush: Supported 00:10:26.196 Reservation: Not Supported 00:10:26.196 Metadata Transferred as: Separate Metadata Buffer 00:10:26.196 Namespace Sharing Capabilities: Private 00:10:26.196 Size (in LBAs): 1548666 (5GiB) 00:10:26.196 Capacity (in 
LBAs): 1548666 (5GiB) 00:10:26.196 Utilization (in LBAs): 1548666 (5GiB) 00:10:26.196 Thin Provisioning: Not Supported 00:10:26.196 Per-NS Atomic Units: No 00:10:26.196 Maximum Single Source Range Length: 128 00:10:26.196 Maximum Copy Length: 128 00:10:26.196 Maximum Source Range Count: 128 00:10:26.196 NGUID/EUI64 Never Reused: No 00:10:26.196 Namespace Write Protected: No 00:10:26.196 Number of LBA Formats: 8 00:10:26.196 Current LBA Format: LBA Format #07 00:10:26.196 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:26.196 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:26.196 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:26.196 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:26.196 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:26.196 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:26.196 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:26.196 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:26.196 00:10:26.196 NVM Specific Namespace Data 00:10:26.196 =========================== 00:10:26.196 Logical Block Storage Tag Mask: 0 00:10:26.196 Protection Information Capabilities: 00:10:26.196 16b Guard Protection Information Storage Tag Support: No 00:10:26.196 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:26.196 Storage Tag Check Read Support: No 00:10:26.196 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.196 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.196 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.196 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.196 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.196 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.196 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.196 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.196 21:09:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:26.196 21:09:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:26.455 ===================================================== 00:10:26.455 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:26.455 ===================================================== 00:10:26.455 Controller Capabilities/Features 00:10:26.455 ================================ 00:10:26.455 Vendor ID: 1b36 00:10:26.455 Subsystem Vendor ID: 1af4 00:10:26.455 Serial Number: 12341 00:10:26.455 Model Number: QEMU NVMe Ctrl 00:10:26.455 Firmware Version: 8.0.0 00:10:26.455 Recommended Arb Burst: 6 00:10:26.455 IEEE OUI Identifier: 00 54 52 00:10:26.455 Multi-path I/O 00:10:26.455 May have multiple subsystem ports: No 00:10:26.455 May have multiple controllers: No 00:10:26.455 Associated with SR-IOV VF: No 00:10:26.455 Max Data Transfer Size: 524288 00:10:26.456 Max Number of Namespaces: 256 00:10:26.456 Max Number of I/O Queues: 64 00:10:26.456 NVMe Specification Version (VS): 1.4 00:10:26.456 NVMe Specification Version (Identify): 1.4 00:10:26.456 Maximum Queue Entries: 2048 00:10:26.456 Contiguous Queues Required: Yes 00:10:26.456 Arbitration Mechanisms Supported 00:10:26.456 Weighted Round 
Robin: Not Supported 00:10:26.456 Vendor Specific: Not Supported 00:10:26.456 Reset Timeout: 7500 ms 00:10:26.456 Doorbell Stride: 4 bytes 00:10:26.456 NVM Subsystem Reset: Not Supported 00:10:26.456 Command Sets Supported 00:10:26.456 NVM Command Set: Supported 00:10:26.456 Boot Partition: Not Supported 00:10:26.456 Memory Page Size Minimum: 4096 bytes 00:10:26.456 Memory Page Size Maximum: 65536 bytes 00:10:26.456 Persistent Memory Region: Not Supported 00:10:26.456 Optional Asynchronous Events Supported 00:10:26.456 Namespace Attribute Notices: Supported 00:10:26.456 Firmware Activation Notices: Not Supported 00:10:26.456 ANA Change Notices: Not Supported 00:10:26.456 PLE Aggregate Log Change Notices: Not Supported 00:10:26.456 LBA Status Info Alert Notices: Not Supported 00:10:26.456 EGE Aggregate Log Change Notices: Not Supported 00:10:26.456 Normal NVM Subsystem Shutdown event: Not Supported 00:10:26.456 Zone Descriptor Change Notices: Not Supported 00:10:26.456 Discovery Log Change Notices: Not Supported 00:10:26.456 Controller Attributes 00:10:26.456 128-bit Host Identifier: Not Supported 00:10:26.456 Non-Operational Permissive Mode: Not Supported 00:10:26.456 NVM Sets: Not Supported 00:10:26.456 Read Recovery Levels: Not Supported 00:10:26.456 Endurance Groups: Not Supported 00:10:26.456 Predictable Latency Mode: Not Supported 00:10:26.456 Traffic Based Keep Alive: Not Supported 00:10:26.456 Namespace Granularity: Not Supported 00:10:26.456 SQ Associations: Not Supported 00:10:26.456 UUID List: Not Supported 00:10:26.456 Multi-Domain Subsystem: Not Supported 00:10:26.456 Fixed Capacity Management: Not Supported 00:10:26.456 Variable Capacity Management: Not Supported 00:10:26.456 Delete Endurance Group: Not Supported 00:10:26.456 Delete NVM Set: Not Supported 00:10:26.456 Extended LBA Formats Supported: Supported 00:10:26.456 Flexible Data Placement Supported: Not Supported 00:10:26.456 00:10:26.456 Controller Memory Buffer Support 00:10:26.456 ================================ 00:10:26.456 Supported: No 00:10:26.456 00:10:26.456 Persistent Memory Region Support 00:10:26.456 ================================ 00:10:26.456 Supported: No 00:10:26.456 00:10:26.456 Admin Command Set Attributes 00:10:26.456 ============================ 00:10:26.456 Security Send/Receive: Not Supported 00:10:26.456 Format NVM: Supported 00:10:26.456 Firmware Activate/Download: Not Supported 00:10:26.456 Namespace Management: Supported 00:10:26.456 Device Self-Test: Not Supported 00:10:26.456 Directives: Supported 00:10:26.456 NVMe-MI: Not Supported 00:10:26.456 Virtualization Management: Not Supported 00:10:26.456 Doorbell Buffer Config: Supported 00:10:26.456 Get LBA Status Capability: Not Supported 00:10:26.456 Command & Feature Lockdown Capability: Not Supported 00:10:26.456 Abort Command Limit: 4 00:10:26.456 Async Event Request Limit: 4 00:10:26.456 Number of Firmware Slots: N/A 00:10:26.456 Firmware Slot 1 Read-Only: N/A 00:10:26.456 Firmware Activation Without Reset: N/A 00:10:26.456 Multiple Update Detection Support: N/A 00:10:26.456 Firmware Update Granularity: No Information Provided 00:10:26.456 Per-Namespace SMART Log: Yes 00:10:26.456 Asymmetric Namespace Access Log Page: Not Supported 00:10:26.456 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:26.456 Command Effects Log Page: Supported 00:10:26.456 Get Log Page Extended Data: Supported 00:10:26.456 Telemetry Log Pages: Not Supported 00:10:26.456 Persistent Event Log Pages: Not Supported 00:10:26.456 Supported Log Pages Log Page: May Support
00:10:26.456 Commands Supported & Effects Log Page: Not Supported 00:10:26.456 Feature Identifiers & Effects Log Page: May Support 00:10:26.456 NVMe-MI Commands & Effects Log Page: May Support 00:10:26.456 Data Area 4 for Telemetry Log: Not Supported 00:10:26.456 Error Log Page Entries Supported: 1 00:10:26.456 Keep Alive: Not Supported 00:10:26.456 00:10:26.456 NVM Command Set Attributes 00:10:26.456 ========================== 00:10:26.456 Submission Queue Entry Size 00:10:26.456 Max: 64 00:10:26.456 Min: 64 00:10:26.456 Completion Queue Entry Size 00:10:26.456 Max: 16 00:10:26.456 Min: 16 00:10:26.456 Number of Namespaces: 256 00:10:26.456 Compare Command: Supported 00:10:26.456 Write Uncorrectable Command: Not Supported 00:10:26.456 Dataset Management Command: Supported 00:10:26.456 Write Zeroes Command: Supported 00:10:26.456 Set Features Save Field: Supported 00:10:26.456 Reservations: Not Supported 00:10:26.456 Timestamp: Supported 00:10:26.456 Copy: Supported 00:10:26.456 Volatile Write Cache: Present 00:10:26.456 Atomic Write Unit (Normal): 1 00:10:26.456 Atomic Write Unit (PFail): 1 00:10:26.456 Atomic Compare & Write Unit: 1 00:10:26.456 Fused Compare & Write: Not Supported 00:10:26.456 Scatter-Gather List 00:10:26.456 SGL Command Set: Supported 00:10:26.456 SGL Keyed: Not Supported 00:10:26.456 SGL Bit Bucket Descriptor: Not Supported 00:10:26.456 SGL Metadata Pointer: Not Supported 00:10:26.456 Oversized SGL: Not Supported 00:10:26.456 SGL Metadata Address: Not Supported 00:10:26.456 SGL Offset: Not Supported 00:10:26.456 Transport SGL Data Block: Not Supported 00:10:26.456 Replay Protected Memory Block: Not Supported 00:10:26.456 00:10:26.456 Firmware Slot Information 00:10:26.456 ========================= 00:10:26.456 Active slot: 1 00:10:26.456 Slot 1 Firmware Revision: 1.0 00:10:26.456 00:10:26.456 00:10:26.456 Commands Supported and Effects 00:10:26.456 ============================== 00:10:26.456 Admin Commands 00:10:26.456 -------------- 00:10:26.456 Delete I/O Submission Queue (00h): Supported 00:10:26.456 Create I/O Submission Queue (01h): Supported 00:10:26.456 Get Log Page (02h): Supported 00:10:26.456 Delete I/O Completion Queue (04h): Supported 00:10:26.456 Create I/O Completion Queue (05h): Supported 00:10:26.456 Identify (06h): Supported 00:10:26.456 Abort (08h): Supported 00:10:26.456 Set Features (09h): Supported 00:10:26.456 Get Features (0Ah): Supported 00:10:26.456 Asynchronous Event Request (0Ch): Supported 00:10:26.456 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:26.456 Directive Send (19h): Supported 00:10:26.456 Directive Receive (1Ah): Supported 00:10:26.456 Virtualization Management (1Ch): Supported 00:10:26.456 Doorbell Buffer Config (7Ch): Supported 00:10:26.456 Format NVM (80h): Supported LBA-Change 00:10:26.456 I/O Commands 00:10:26.456 ------------ 00:10:26.456 Flush (00h): Supported LBA-Change 00:10:26.456 Write (01h): Supported LBA-Change 00:10:26.456 Read (02h): Supported 00:10:26.456 Compare (05h): Supported 00:10:26.456 Write Zeroes (08h): Supported LBA-Change 00:10:26.456 Dataset Management (09h): Supported LBA-Change 00:10:26.456 Unknown (0Ch): Supported 00:10:26.456 Unknown (12h): Supported 00:10:26.456 Copy (19h): Supported LBA-Change 00:10:26.456 Unknown (1Dh): Supported LBA-Change 00:10:26.456 00:10:26.456 Error Log 00:10:26.456 ========= 00:10:26.456 00:10:26.456 Arbitration 00:10:26.456 =========== 00:10:26.456 Arbitration Burst: no limit 00:10:26.456 00:10:26.456 Power Management 00:10:26.456 ================
00:10:26.456 Number of Power States: 1 00:10:26.456 Current Power State: Power State #0 00:10:26.456 Power State #0: 00:10:26.456 Max Power: 25.00 W 00:10:26.456 Non-Operational State: Operational 00:10:26.456 Entry Latency: 16 microseconds 00:10:26.456 Exit Latency: 4 microseconds 00:10:26.456 Relative Read Throughput: 0 00:10:26.456 Relative Read Latency: 0 00:10:26.456 Relative Write Throughput: 0 00:10:26.456 Relative Write Latency: 0 00:10:26.456 Idle Power: Not Reported 00:10:26.456 Active Power: Not Reported 00:10:26.456 Non-Operational Permissive Mode: Not Supported 00:10:26.456 00:10:26.456 Health Information 00:10:26.456 ================== 00:10:26.456 Critical Warnings: 00:10:26.456 Available Spare Space: OK 00:10:26.456 Temperature: OK 00:10:26.456 Device Reliability: OK 00:10:26.456 Read Only: No 00:10:26.456 Volatile Memory Backup: OK 00:10:26.456 Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.456 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:26.456 Available Spare: 0% 00:10:26.456 Available Spare Threshold: 0% 00:10:26.456 Life Percentage Used: 0% 00:10:26.456 Data Units Read: 742 00:10:26.456 Data Units Written: 589 00:10:26.456 Host Read Commands: 34870 00:10:26.456 Host Write Commands: 32548 00:10:26.456 Controller Busy Time: 0 minutes 00:10:26.456 Power Cycles: 0 00:10:26.456 Power On Hours: 0 hours 00:10:26.456 Unsafe Shutdowns: 0 00:10:26.456 Unrecoverable Media Errors: 0 00:10:26.456 Lifetime Error Log Entries: 0 00:10:26.456 Warning Temperature Time: 0 minutes 00:10:26.456 Critical Temperature Time: 0 minutes 00:10:26.456 00:10:26.456 Number of Queues 00:10:26.456 ================ 00:10:26.456 Number of I/O Submission Queues: 64 00:10:26.457 Number of I/O Completion Queues: 64 00:10:26.457 00:10:26.457 ZNS Specific Controller Data 00:10:26.457 ============================ 00:10:26.457 Zone Append Size Limit: 0 00:10:26.457 00:10:26.457 00:10:26.457 Active Namespaces 00:10:26.457 ================= 00:10:26.457 Namespace ID:1 00:10:26.457 Error Recovery Timeout: Unlimited 00:10:26.457 Command Set Identifier: NVM (00h) 00:10:26.457 Deallocate: Supported 00:10:26.457 Deallocated/Unwritten Error: Supported 00:10:26.457 Deallocated Read Value: All 0x00 00:10:26.457 Deallocate in Write Zeroes: Not Supported 00:10:26.457 Deallocated Guard Field: 0xFFFF 00:10:26.457 Flush: Supported 00:10:26.457 Reservation: Not Supported 00:10:26.457 Namespace Sharing Capabilities: Private 00:10:26.457 Size (in LBAs): 1310720 (5GiB) 00:10:26.457 Capacity (in LBAs): 1310720 (5GiB) 00:10:26.457 Utilization (in LBAs): 1310720 (5GiB) 00:10:26.457 Thin Provisioning: Not Supported 00:10:26.457 Per-NS Atomic Units: No 00:10:26.457 Maximum Single Source Range Length: 128 00:10:26.457 Maximum Copy Length: 128 00:10:26.457 Maximum Source Range Count: 128 00:10:26.457 NGUID/EUI64 Never Reused: No 00:10:26.457 Namespace Write Protected: No 00:10:26.457 Number of LBA Formats: 8 00:10:26.457 Current LBA Format: LBA Format #04 00:10:26.457 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:26.457 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:26.457 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:26.457 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:26.457 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:26.457 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:26.457 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:26.457 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:26.457 00:10:26.457 NVM Specific Namespace Data 00:10:26.457 
=========================== 00:10:26.457 Logical Block Storage Tag Mask: 0 00:10:26.457 Protection Information Capabilities: 00:10:26.457 16b Guard Protection Information Storage Tag Support: No 00:10:26.457 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:26.457 Storage Tag Check Read Support: No 00:10:26.457 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.457 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.457 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.457 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.457 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.457 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.457 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.457 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.457 21:09:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:26.457 21:09:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:26.717 ===================================================== 00:10:26.717 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:26.717 ===================================================== 00:10:26.717 Controller Capabilities/Features 00:10:26.717 ================================ 00:10:26.717 Vendor ID: 1b36 00:10:26.717 Subsystem Vendor ID: 1af4 00:10:26.717 Serial Number: 12342 00:10:26.717 Model Number: QEMU NVMe Ctrl 00:10:26.717 Firmware Version: 8.0.0 00:10:26.717 Recommended Arb Burst: 6 00:10:26.717 IEEE OUI Identifier: 00 54 52 00:10:26.717 Multi-path I/O 00:10:26.717 May have multiple subsystem ports: No 00:10:26.717 May have multiple controllers: No 00:10:26.717 Associated with SR-IOV VF: No 00:10:26.717 Max Data Transfer Size: 524288 00:10:26.717 Max Number of Namespaces: 256 00:10:26.717 Max Number of I/O Queues: 64 00:10:26.717 NVMe Specification Version (VS): 1.4 00:10:26.717 NVMe Specification Version (Identify): 1.4 00:10:26.717 Maximum Queue Entries: 2048 00:10:26.717 Contiguous Queues Required: Yes 00:10:26.717 Arbitration Mechanisms Supported 00:10:26.717 Weighted Round Robin: Not Supported 00:10:26.717 Vendor Specific: Not Supported 00:10:26.717 Reset Timeout: 7500 ms 00:10:26.717 Doorbell Stride: 4 bytes 00:10:26.717 NVM Subsystem Reset: Not Supported 00:10:26.717 Command Sets Supported 00:10:26.717 NVM Command Set: Supported 00:10:26.717 Boot Partition: Not Supported 00:10:26.717 Memory Page Size Minimum: 4096 bytes 00:10:26.717 Memory Page Size Maximum: 65536 bytes 00:10:26.717 Persistent Memory Region: Not Supported 00:10:26.717 Optional Asynchronous Events Supported 00:10:26.717 Namespace Attribute Notices: Supported 00:10:26.717 Firmware Activation Notices: Not Supported 00:10:26.717 ANA Change Notices: Not Supported 00:10:26.717 PLE Aggregate Log Change Notices: Not Supported 00:10:26.717 LBA Status Info Alert Notices: Not Supported 00:10:26.717 EGE Aggregate Log Change Notices: Not Supported 00:10:26.717 Normal NVM Subsystem Shutdown event: Not Supported 00:10:26.717 Zone Descriptor Change Notices: Not Supported 00:10:26.717 Discovery Log Change Notices: Not Supported 
00:10:26.717 Controller Attributes 00:10:26.717 128-bit Host Identifier: Not Supported 00:10:26.717 Non-Operational Permissive Mode: Not Supported 00:10:26.717 NVM Sets: Not Supported 00:10:26.717 Read Recovery Levels: Not Supported 00:10:26.717 Endurance Groups: Not Supported 00:10:26.717 Predictable Latency Mode: Not Supported 00:10:26.717 Traffic Based Keep Alive: Not Supported 00:10:26.717 Namespace Granularity: Not Supported 00:10:26.717 SQ Associations: Not Supported 00:10:26.717 UUID List: Not Supported 00:10:26.717 Multi-Domain Subsystem: Not Supported 00:10:26.717 Fixed Capacity Management: Not Supported 00:10:26.717 Variable Capacity Management: Not Supported 00:10:26.717 Delete Endurance Group: Not Supported 00:10:26.717 Delete NVM Set: Not Supported 00:10:26.717 Extended LBA Formats Supported: Supported 00:10:26.717 Flexible Data Placement Supported: Not Supported 00:10:26.717 00:10:26.717 Controller Memory Buffer Support 00:10:26.717 ================================ 00:10:26.717 Supported: No 00:10:26.717 00:10:26.717 Persistent Memory Region Support 00:10:26.717 ================================ 00:10:26.717 Supported: No 00:10:26.717 00:10:26.717 Admin Command Set Attributes 00:10:26.717 ============================ 00:10:26.717 Security Send/Receive: Not Supported 00:10:26.717 Format NVM: Supported 00:10:26.717 Firmware Activate/Download: Not Supported 00:10:26.717 Namespace Management: Supported 00:10:26.717 Device Self-Test: Not Supported 00:10:26.717 Directives: Supported 00:10:26.717 NVMe-MI: Not Supported 00:10:26.717 Virtualization Management: Not Supported 00:10:26.717 Doorbell Buffer Config: Supported 00:10:26.717 Get LBA Status Capability: Not Supported 00:10:26.717 Command & Feature Lockdown Capability: Not Supported 00:10:26.717 Abort Command Limit: 4 00:10:26.717 Async Event Request Limit: 4 00:10:26.717 Number of Firmware Slots: N/A 00:10:26.717 Firmware Slot 1 Read-Only: N/A 00:10:26.717 Firmware Activation Without Reset: N/A 00:10:26.717 Multiple Update Detection Support: N/A 00:10:26.717 Firmware Update Granularity: No Information Provided 00:10:26.717 Per-Namespace SMART Log: Yes 00:10:26.717 Asymmetric Namespace Access Log Page: Not Supported 00:10:26.717 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:26.717 Command Effects Log Page: Supported 00:10:26.717 Get Log Page Extended Data: Supported 00:10:26.717 Telemetry Log Pages: Not Supported 00:10:26.717 Persistent Event Log Pages: Not Supported 00:10:26.717 Supported Log Pages Log Page: May Support 00:10:26.717 Commands Supported & Effects Log Page: Not Supported 00:10:26.717 Feature Identifiers & Effects Log Page: May Support 00:10:26.717 NVMe-MI Commands & Effects Log Page: May Support 00:10:26.717 Data Area 4 for Telemetry Log: Not Supported 00:10:26.717 Error Log Page Entries Supported: 1 00:10:26.717 Keep Alive: Not Supported 00:10:26.717 00:10:26.717 NVM Command Set Attributes 00:10:26.717 ========================== 00:10:26.717 Submission Queue Entry Size 00:10:26.717 Max: 64 00:10:26.717 Min: 64 00:10:26.717 Completion Queue Entry Size 00:10:26.717 Max: 16 00:10:26.717 Min: 16 00:10:26.717 Number of Namespaces: 256 00:10:26.717 Compare Command: Supported 00:10:26.717 Write Uncorrectable Command: Not Supported 00:10:26.717 Dataset Management Command: Supported 00:10:26.717 Write Zeroes Command: Supported 00:10:26.717 Set Features Save Field: Supported 00:10:26.717 Reservations: Not Supported 00:10:26.717 Timestamp: Supported 00:10:26.717 Copy: Supported 00:10:26.717 Volatile Write Cache: Present
00:10:26.717 Atomic Write Unit (Normal): 1 00:10:26.717 Atomic Write Unit (PFail): 1 00:10:26.717 Atomic Compare & Write Unit: 1 00:10:26.717 Fused Compare & Write: Not Supported 00:10:26.717 Scatter-Gather List 00:10:26.717 SGL Command Set: Supported 00:10:26.717 SGL Keyed: Not Supported 00:10:26.717 SGL Bit Bucket Descriptor: Not Supported 00:10:26.717 SGL Metadata Pointer: Not Supported 00:10:26.717 Oversized SGL: Not Supported 00:10:26.717 SGL Metadata Address: Not Supported 00:10:26.717 SGL Offset: Not Supported 00:10:26.717 Transport SGL Data Block: Not Supported 00:10:26.717 Replay Protected Memory Block: Not Supported 00:10:26.717 00:10:26.717 Firmware Slot Information 00:10:26.717 ========================= 00:10:26.717 Active slot: 1 00:10:26.717 Slot 1 Firmware Revision: 1.0 00:10:26.717 00:10:26.717 00:10:26.717 Commands Supported and Effects 00:10:26.717 ============================== 00:10:26.717 Admin Commands 00:10:26.717 -------------- 00:10:26.717 Delete I/O Submission Queue (00h): Supported 00:10:26.717 Create I/O Submission Queue (01h): Supported 00:10:26.717 Get Log Page (02h): Supported 00:10:26.717 Delete I/O Completion Queue (04h): Supported 00:10:26.717 Create I/O Completion Queue (05h): Supported 00:10:26.717 Identify (06h): Supported 00:10:26.717 Abort (08h): Supported 00:10:26.717 Set Features (09h): Supported 00:10:26.717 Get Features (0Ah): Supported 00:10:26.717 Asynchronous Event Request (0Ch): Supported 00:10:26.717 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:26.717 Directive Send (19h): Supported 00:10:26.717 Directive Receive (1Ah): Supported 00:10:26.717 Virtualization Management (1Ch): Supported 00:10:26.717 Doorbell Buffer Config (7Ch): Supported 00:10:26.717 Format NVM (80h): Supported LBA-Change 00:10:26.717 I/O Commands 00:10:26.717 ------------ 00:10:26.717 Flush (00h): Supported LBA-Change 00:10:26.718 Write (01h): Supported LBA-Change 00:10:26.718 Read (02h): Supported 00:10:26.718 Compare (05h): Supported 00:10:26.718 Write Zeroes (08h): Supported LBA-Change 00:10:26.718 Dataset Management (09h): Supported LBA-Change 00:10:26.718 Unknown (0Ch): Supported 00:10:26.718 Unknown (12h): Supported 00:10:26.718 Copy (19h): Supported LBA-Change 00:10:26.718 Unknown (1Dh): Supported LBA-Change 00:10:26.718 00:10:26.718 Error Log 00:10:26.718 ========= 00:10:26.718 00:10:26.718 Arbitration 00:10:26.718 =========== 00:10:26.718 Arbitration Burst: no limit 00:10:26.718 00:10:26.718 Power Management 00:10:26.718 ================ 00:10:26.718 Number of Power States: 1 00:10:26.718 Current Power State: Power State #0 00:10:26.718 Power State #0: 00:10:26.718 Max Power: 25.00 W 00:10:26.718 Non-Operational State: Operational 00:10:26.718 Entry Latency: 16 microseconds 00:10:26.718 Exit Latency: 4 microseconds 00:10:26.718 Relative Read Throughput: 0 00:10:26.718 Relative Read Latency: 0 00:10:26.718 Relative Write Throughput: 0 00:10:26.718 Relative Write Latency: 0 00:10:26.718 Idle Power: Not Reported 00:10:26.718 Active Power: Not Reported 00:10:26.718 Non-Operational Permissive Mode: Not Supported 00:10:26.718 00:10:26.718 Health Information 00:10:26.718 ================== 00:10:26.718 Critical Warnings: 00:10:26.718 Available Spare Space: OK 00:10:26.718 Temperature: OK 00:10:26.718 Device Reliability: OK 00:10:26.718 Read Only: No 00:10:26.718 Volatile Memory Backup: OK 00:10:26.718 Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.718 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:26.718 Available Spare: 0% 00:10:26.718 
Available Spare Threshold: 0% 00:10:26.718 Life Percentage Used: 0% 00:10:26.718 Data Units Read: 2230 00:10:26.718 Data Units Written: 1910 00:10:26.718 Host Read Commands: 102723 00:10:26.718 Host Write Commands: 98493 00:10:26.718 Controller Busy Time: 0 minutes 00:10:26.718 Power Cycles: 0 00:10:26.718 Power On Hours: 0 hours 00:10:26.718 Unsafe Shutdowns: 0 00:10:26.718 Unrecoverable Media Errors: 0 00:10:26.718 Lifetime Error Log Entries: 0 00:10:26.718 Warning Temperature Time: 0 minutes 00:10:26.718 Critical Temperature Time: 0 minutes 00:10:26.718 00:10:26.718 Number of Queues 00:10:26.718 ================ 00:10:26.718 Number of I/O Submission Queues: 64 00:10:26.718 Number of I/O Completion Queues: 64 00:10:26.718 00:10:26.718 ZNS Specific Controller Data 00:10:26.718 ============================ 00:10:26.718 Zone Append Size Limit: 0 00:10:26.718 00:10:26.718 00:10:26.718 Active Namespaces 00:10:26.718 ================= 00:10:26.718 Namespace ID:1 00:10:26.718 Error Recovery Timeout: Unlimited 00:10:26.718 Command Set Identifier: NVM (00h) 00:10:26.718 Deallocate: Supported 00:10:26.718 Deallocated/Unwritten Error: Supported 00:10:26.718 Deallocated Read Value: All 0x00 00:10:26.718 Deallocate in Write Zeroes: Not Supported 00:10:26.718 Deallocated Guard Field: 0xFFFF 00:10:26.718 Flush: Supported 00:10:26.718 Reservation: Not Supported 00:10:26.718 Namespace Sharing Capabilities: Private 00:10:26.718 Size (in LBAs): 1048576 (4GiB) 00:10:26.718 Capacity (in LBAs): 1048576 (4GiB) 00:10:26.718 Utilization (in LBAs): 1048576 (4GiB) 00:10:26.718 Thin Provisioning: Not Supported 00:10:26.718 Per-NS Atomic Units: No 00:10:26.718 Maximum Single Source Range Length: 128 00:10:26.718 Maximum Copy Length: 128 00:10:26.718 Maximum Source Range Count: 128 00:10:26.718 NGUID/EUI64 Never Reused: No 00:10:26.718 Namespace Write Protected: No 00:10:26.718 Number of LBA Formats: 8 00:10:26.718 Current LBA Format: LBA Format #04 00:10:26.718 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:26.718 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:26.718 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:26.718 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:26.718 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:26.718 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:26.718 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:26.718 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:26.718 00:10:26.718 NVM Specific Namespace Data 00:10:26.718 =========================== 00:10:26.718 Logical Block Storage Tag Mask: 0 00:10:26.718 Protection Information Capabilities: 00:10:26.718 16b Guard Protection Information Storage Tag Support: No 00:10:26.718 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:26.718 Storage Tag Check Read Support: No 00:10:26.718 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.718 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.718 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.718 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.718 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.718 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.718 Extended LBA Format #06: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:10:26.718 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.718 Namespace ID:2 00:10:26.718 Error Recovery Timeout: Unlimited 00:10:26.718 Command Set Identifier: NVM (00h) 00:10:26.718 Deallocate: Supported 00:10:26.718 Deallocated/Unwritten Error: Supported 00:10:26.718 Deallocated Read Value: All 0x00 00:10:26.718 Deallocate in Write Zeroes: Not Supported 00:10:26.718 Deallocated Guard Field: 0xFFFF 00:10:26.718 Flush: Supported 00:10:26.718 Reservation: Not Supported 00:10:26.718 Namespace Sharing Capabilities: Private 00:10:26.718 Size (in LBAs): 1048576 (4GiB) 00:10:26.718 Capacity (in LBAs): 1048576 (4GiB) 00:10:26.718 Utilization (in LBAs): 1048576 (4GiB) 00:10:26.718 Thin Provisioning: Not Supported 00:10:26.718 Per-NS Atomic Units: No 00:10:26.718 Maximum Single Source Range Length: 128 00:10:26.718 Maximum Copy Length: 128 00:10:26.718 Maximum Source Range Count: 128 00:10:26.718 NGUID/EUI64 Never Reused: No 00:10:26.718 Namespace Write Protected: No 00:10:26.718 Number of LBA Formats: 8 00:10:26.718 Current LBA Format: LBA Format #04 00:10:26.718 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:26.718 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:26.718 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:26.718 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:26.718 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:26.718 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:26.718 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:26.718 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:26.718 00:10:26.718 NVM Specific Namespace Data 00:10:26.718 =========================== 00:10:26.718 Logical Block Storage Tag Mask: 0 00:10:26.718 Protection Information Capabilities: 00:10:26.718 16b Guard Protection Information Storage Tag Support: No 00:10:26.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:26.719 Storage Tag Check Read Support: No 00:10:26.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Namespace ID:3 00:10:26.719 Error Recovery Timeout: Unlimited 00:10:26.719 Command Set Identifier: NVM (00h) 00:10:26.719 Deallocate: Supported 00:10:26.719 Deallocated/Unwritten Error: Supported 00:10:26.719 Deallocated Read Value: All 0x00 00:10:26.719 Deallocate in Write Zeroes: Not Supported 00:10:26.719 Deallocated Guard Field: 0xFFFF 00:10:26.719 Flush: Supported 00:10:26.719 Reservation: Not Supported 00:10:26.719 Namespace Sharing Capabilities: Private 00:10:26.719 Size (in LBAs): 1048576 (4GiB) 00:10:26.719 Capacity (in LBAs): 1048576 (4GiB) 00:10:26.719 Utilization (in LBAs): 1048576 (4GiB) 00:10:26.719 Thin Provisioning: Not Supported 
00:10:26.719 Per-NS Atomic Units: No 00:10:26.719 Maximum Single Source Range Length: 128 00:10:26.719 Maximum Copy Length: 128 00:10:26.719 Maximum Source Range Count: 128 00:10:26.719 NGUID/EUI64 Never Reused: No 00:10:26.719 Namespace Write Protected: No 00:10:26.719 Number of LBA Formats: 8 00:10:26.719 Current LBA Format: LBA Format #04 00:10:26.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:26.719 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:26.719 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:26.719 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:26.719 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:26.719 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:26.719 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:26.719 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:26.719 00:10:26.719 NVM Specific Namespace Data 00:10:26.719 =========================== 00:10:26.719 Logical Block Storage Tag Mask: 0 00:10:26.719 Protection Information Capabilities: 00:10:26.719 16b Guard Protection Information Storage Tag Support: No 00:10:26.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:26.719 Storage Tag Check Read Support: No 00:10:26.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.719 21:09:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:26.719 21:09:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:26.979 ===================================================== 00:10:26.979 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:26.979 ===================================================== 00:10:26.979 Controller Capabilities/Features 00:10:26.979 ================================ 00:10:26.979 Vendor ID: 1b36 00:10:26.979 Subsystem Vendor ID: 1af4 00:10:26.979 Serial Number: 12343 00:10:26.979 Model Number: QEMU NVMe Ctrl 00:10:26.979 Firmware Version: 8.0.0 00:10:26.979 Recommended Arb Burst: 6 00:10:26.979 IEEE OUI Identifier: 00 54 52 00:10:26.979 Multi-path I/O 00:10:26.979 May have multiple subsystem ports: No 00:10:26.979 May have multiple controllers: Yes 00:10:26.979 Associated with SR-IOV VF: No 00:10:26.979 Max Data Transfer Size: 524288 00:10:26.979 Max Number of Namespaces: 256 00:10:26.979 Max Number of I/O Queues: 64 00:10:26.979 NVMe Specification Version (VS): 1.4 00:10:26.979 NVMe Specification Version (Identify): 1.4 00:10:26.979 Maximum Queue Entries: 2048 00:10:26.979 Contiguous Queues Required: Yes 00:10:26.979 Arbitration Mechanisms Supported 00:10:26.979 Weighted Round Robin: Not Supported 00:10:26.979 Vendor Specific: Not Supported 00:10:26.979 Reset Timeout: 7500 ms 00:10:26.979 
Doorbell Stride: 4 bytes 00:10:26.979 NVM Subsystem Reset: Not Supported 00:10:26.979 Command Sets Supported 00:10:26.979 NVM Command Set: Supported 00:10:26.979 Boot Partition: Not Supported 00:10:26.979 Memory Page Size Minimum: 4096 bytes 00:10:26.979 Memory Page Size Maximum: 65536 bytes 00:10:26.979 Persistent Memory Region: Not Supported 00:10:26.979 Optional Asynchronous Events Supported 00:10:26.979 Namespace Attribute Notices: Supported 00:10:26.979 Firmware Activation Notices: Not Supported 00:10:26.979 ANA Change Notices: Not Supported 00:10:26.979 PLE Aggregate Log Change Notices: Not Supported 00:10:26.979 LBA Status Info Alert Notices: Not Supported 00:10:26.979 EGE Aggregate Log Change Notices: Not Supported 00:10:26.979 Normal NVM Subsystem Shutdown event: Not Supported 00:10:26.979 Zone Descriptor Change Notices: Not Supported 00:10:26.979 Discovery Log Change Notices: Not Supported 00:10:26.979 Controller Attributes 00:10:26.979 128-bit Host Identifier: Not Supported 00:10:26.979 Non-Operational Permissive Mode: Not Supported 00:10:26.979 NVM Sets: Not Supported 00:10:26.979 Read Recovery Levels: Not Supported 00:10:26.979 Endurance Groups: Supported 00:10:26.979 Predictable Latency Mode: Not Supported 00:10:26.979 Traffic Based Keep Alive: Not Supported 00:10:26.979 Namespace Granularity: Not Supported 00:10:26.979 SQ Associations: Not Supported 00:10:26.979 UUID List: Not Supported 00:10:26.979 Multi-Domain Subsystem: Not Supported 00:10:26.979 Fixed Capacity Management: Not Supported 00:10:26.979 Variable Capacity Management: Not Supported 00:10:26.979 Delete Endurance Group: Not Supported 00:10:26.979 Delete NVM Set: Not Supported 00:10:26.979 Extended LBA Formats Supported: Supported 00:10:26.979 Flexible Data Placement Supported: Supported 00:10:26.979 00:10:26.979 Controller Memory Buffer Support 00:10:26.979 ================================ 00:10:26.979 Supported: No 00:10:26.979 00:10:26.979 Persistent Memory Region Support 00:10:26.979 ================================ 00:10:26.979 Supported: No 00:10:26.979 00:10:26.979 Admin Command Set Attributes 00:10:26.979 ============================ 00:10:26.979 Security Send/Receive: Not Supported 00:10:26.979 Format NVM: Supported 00:10:26.979 Firmware Activate/Download: Not Supported 00:10:26.979 Namespace Management: Supported 00:10:26.979 Device Self-Test: Not Supported 00:10:26.979 Directives: Supported 00:10:26.979 NVMe-MI: Not Supported 00:10:26.979 Virtualization Management: Not Supported 00:10:26.980 Doorbell Buffer Config: Supported 00:10:26.980 Get LBA Status Capability: Not Supported 00:10:26.980 Command & Feature Lockdown Capability: Not Supported 00:10:26.980 Abort Command Limit: 4 00:10:26.980 Async Event Request Limit: 4 00:10:26.980 Number of Firmware Slots: N/A 00:10:26.980 Firmware Slot 1 Read-Only: N/A 00:10:26.980 Firmware Activation Without Reset: N/A 00:10:26.980 Multiple Update Detection Support: N/A 00:10:26.980 Firmware Update Granularity: No Information Provided 00:10:26.980 Per-Namespace SMART Log: Yes 00:10:26.980 Asymmetric Namespace Access Log Page: Not Supported 00:10:26.980 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:26.980 Command Effects Log Page: Supported 00:10:26.980 Get Log Page Extended Data: Supported 00:10:26.980 Telemetry Log Pages: Not Supported 00:10:26.980 Persistent Event Log Pages: Not Supported 00:10:26.980 Supported Log Pages Log Page: May Support 00:10:26.980 Commands Supported & Effects Log Page: Not Supported 00:10:26.980 Feature Identifiers & Effects Log
Page: May Support 00:10:26.980 NVMe-MI Commands & Effects Log Page: May Support 00:10:26.980 Data Area 4 for Telemetry Log: Not Supported 00:10:26.980 Error Log Page Entries Supported: 1 00:10:26.980 Keep Alive: Not Supported 00:10:26.980 00:10:26.980 NVM Command Set Attributes 00:10:26.980 ========================== 00:10:26.980 Submission Queue Entry Size 00:10:26.980 Max: 64 00:10:26.980 Min: 64 00:10:26.980 Completion Queue Entry Size 00:10:26.980 Max: 16 00:10:26.980 Min: 16 00:10:26.980 Number of Namespaces: 256 00:10:26.980 Compare Command: Supported 00:10:26.980 Write Uncorrectable Command: Not Supported 00:10:26.980 Dataset Management Command: Supported 00:10:26.980 Write Zeroes Command: Supported 00:10:26.980 Set Features Save Field: Supported 00:10:26.980 Reservations: Not Supported 00:10:26.980 Timestamp: Supported 00:10:26.980 Copy: Supported 00:10:26.980 Volatile Write Cache: Present 00:10:26.980 Atomic Write Unit (Normal): 1 00:10:26.980 Atomic Write Unit (PFail): 1 00:10:26.980 Atomic Compare & Write Unit: 1 00:10:26.980 Fused Compare & Write: Not Supported 00:10:26.980 Scatter-Gather List 00:10:26.980 SGL Command Set: Supported 00:10:26.980 SGL Keyed: Not Supported 00:10:26.980 SGL Bit Bucket Descriptor: Not Supported 00:10:26.980 SGL Metadata Pointer: Not Supported 00:10:26.980 Oversized SGL: Not Supported 00:10:26.980 SGL Metadata Address: Not Supported 00:10:26.980 SGL Offset: Not Supported 00:10:26.980 Transport SGL Data Block: Not Supported 00:10:26.980 Replay Protected Memory Block: Not Supported 00:10:26.980 00:10:26.980 Firmware Slot Information 00:10:26.980 ========================= 00:10:26.980 Active slot: 1 00:10:26.980 Slot 1 Firmware Revision: 1.0 00:10:26.980 00:10:26.980 00:10:26.980 Commands Supported and Effects 00:10:26.980 ============================== 00:10:26.980 Admin Commands 00:10:26.980 -------------- 00:10:26.980 Delete I/O Submission Queue (00h): Supported 00:10:26.980 Create I/O Submission Queue (01h): Supported 00:10:26.980 Get Log Page (02h): Supported 00:10:26.980 Delete I/O Completion Queue (04h): Supported 00:10:26.980 Create I/O Completion Queue (05h): Supported 00:10:26.980 Identify (06h): Supported 00:10:26.980 Abort (08h): Supported 00:10:26.980 Set Features (09h): Supported 00:10:26.980 Get Features (0Ah): Supported 00:10:26.980 Asynchronous Event Request (0Ch): Supported 00:10:26.980 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:26.980 Directive Send (19h): Supported 00:10:26.980 Directive Receive (1Ah): Supported 00:10:26.980 Virtualization Management (1Ch): Supported 00:10:26.980 Doorbell Buffer Config (7Ch): Supported 00:10:26.980 Format NVM (80h): Supported LBA-Change 00:10:26.980 I/O Commands 00:10:26.980 ------------ 00:10:26.980 Flush (00h): Supported LBA-Change 00:10:26.980 Write (01h): Supported LBA-Change 00:10:26.980 Read (02h): Supported 00:10:26.980 Compare (05h): Supported 00:10:26.980 Write Zeroes (08h): Supported LBA-Change 00:10:26.980 Dataset Management (09h): Supported LBA-Change 00:10:26.980 Unknown (0Ch): Supported 00:10:26.980 Unknown (12h): Supported 00:10:26.980 Copy (19h): Supported LBA-Change 00:10:26.980 Unknown (1Dh): Supported LBA-Change 00:10:26.980 00:10:26.980 Error Log 00:10:26.980 ========= 00:10:26.980 00:10:26.980 Arbitration 00:10:26.980 =========== 00:10:26.980 Arbitration Burst: no limit 00:10:26.980 00:10:26.980 Power Management 00:10:26.980 ================ 00:10:26.980 Number of Power States: 1 00:10:26.980 Current Power State: Power State #0 00:10:26.980 Power State #0:
00:10:26.980 Max Power: 25.00 W 00:10:26.980 Non-Operational State: Operational 00:10:26.980 Entry Latency: 16 microseconds 00:10:26.980 Exit Latency: 4 microseconds 00:10:26.980 Relative Read Throughput: 0 00:10:26.980 Relative Read Latency: 0 00:10:26.980 Relative Write Throughput: 0 00:10:26.980 Relative Write Latency: 0 00:10:26.980 Idle Power: Not Reported 00:10:26.980 Active Power: Not Reported 00:10:26.980 Non-Operational Permissive Mode: Not Supported 00:10:26.980 00:10:26.980 Health Information 00:10:26.980 ================== 00:10:26.980 Critical Warnings: 00:10:26.980 Available Spare Space: OK 00:10:26.980 Temperature: OK 00:10:26.980 Device Reliability: OK 00:10:26.980 Read Only: No 00:10:26.980 Volatile Memory Backup: OK 00:10:26.980 Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.980 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:26.980 Available Spare: 0% 00:10:26.980 Available Spare Threshold: 0% 00:10:26.980 Life Percentage Used: 0% 00:10:26.980 Data Units Read: 823 00:10:26.980 Data Units Written: 716 00:10:26.980 Host Read Commands: 34899 00:10:26.980 Host Write Commands: 33489 00:10:26.980 Controller Busy Time: 0 minutes 00:10:26.980 Power Cycles: 0 00:10:26.980 Power On Hours: 0 hours 00:10:26.980 Unsafe Shutdowns: 0 00:10:26.980 Unrecoverable Media Errors: 0 00:10:26.980 Lifetime Error Log Entries: 0 00:10:26.980 Warning Temperature Time: 0 minutes 00:10:26.980 Critical Temperature Time: 0 minutes 00:10:26.980 00:10:26.980 Number of Queues 00:10:26.980 ================ 00:10:26.980 Number of I/O Submission Queues: 64 00:10:26.980 Number of I/O Completion Queues: 64 00:10:26.980 00:10:26.980 ZNS Specific Controller Data 00:10:26.980 ============================ 00:10:26.980 Zone Append Size Limit: 0 00:10:26.980 00:10:26.980 00:10:26.980 Active Namespaces 00:10:26.980 ================= 00:10:26.980 Namespace ID:1 00:10:26.980 Error Recovery Timeout: Unlimited 00:10:26.980 Command Set Identifier: NVM (00h) 00:10:26.980 Deallocate: Supported 00:10:26.980 Deallocated/Unwritten Error: Supported 00:10:26.980 Deallocated Read Value: All 0x00 00:10:26.980 Deallocate in Write Zeroes: Not Supported 00:10:26.980 Deallocated Guard Field: 0xFFFF 00:10:26.980 Flush: Supported 00:10:26.980 Reservation: Not Supported 00:10:26.980 Namespace Sharing Capabilities: Multiple Controllers 00:10:26.980 Size (in LBAs): 262144 (1GiB) 00:10:26.980 Capacity (in LBAs): 262144 (1GiB) 00:10:26.980 Utilization (in LBAs): 262144 (1GiB) 00:10:26.980 Thin Provisioning: Not Supported 00:10:26.980 Per-NS Atomic Units: No 00:10:26.980 Maximum Single Source Range Length: 128 00:10:26.980 Maximum Copy Length: 128 00:10:26.980 Maximum Source Range Count: 128 00:10:26.980 NGUID/EUI64 Never Reused: No 00:10:26.980 Namespace Write Protected: No 00:10:26.980 Endurance group ID: 1 00:10:26.980 Number of LBA Formats: 8 00:10:26.980 Current LBA Format: LBA Format #04 00:10:26.980 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:26.980 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:26.980 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:26.980 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:26.980 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:26.980 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:26.980 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:26.980 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:26.980 00:10:26.980 Get Feature FDP: 00:10:26.980 ================ 00:10:26.980 Enabled: Yes 00:10:26.980 FDP configuration index: 0 00:10:26.980 
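[Editorial aside] The four FDP log pages that spdk_nvme_identify decodes just below (configurations, reclaim unit handle usage, statistics, events) can also be pulled by hand when a controller is bound to the kernel NVMe driver. A minimal sketch, assuming nvme-cli and a kernel-visible /dev/nvme0 (in this run the controller is claimed by SPDK's userspace driver, so this is illustrative only); the log page identifiers come from the NVMe 2.0 FDP feature (TP4146):

    #!/usr/bin/env bash
    # Illustrative only -- not part of nvme.sh. /dev/nvme0 is an assumed
    # device node; the controllers in this run are not kernel-attached.
    # FDP log page IDs (NVMe 2.0 / TP4146):
    #   0x20 FDP Configurations    0x21 Reclaim Unit Handle Usage
    #   0x22 FDP Statistics        0x23 FDP Events
    dev=/dev/nvme0
    for lid in 0x20 0x21 0x22 0x23; do
        echo "=== log page ${lid} ==="
        nvme get-log "${dev}" --log-id="${lid}" --log-len=512   # raw dump
    done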
00:10:26.980 FDP configurations log page 00:10:26.980 =========================== 00:10:26.980 Number of FDP configurations: 1 00:10:26.980 Version: 0 00:10:26.980 Size: 112 00:10:26.980 FDP Configuration Descriptor: 0 00:10:26.980 Descriptor Size: 96 00:10:26.980 Reclaim Group Identifier format: 2 00:10:26.980 FDP Volatile Write Cache: Not Present 00:10:26.980 FDP Configuration: Valid 00:10:26.980 Vendor Specific Size: 0 00:10:26.980 Number of Reclaim Groups: 2 00:10:26.980 Number of Reclaim Unit Handles: 8 00:10:26.980 Max Placement Identifiers: 128 00:10:26.980 Number of Namespaces Supported: 256 00:10:26.980 Reclaim Unit Nominal Size: 6000000 bytes 00:10:26.980 Estimated Reclaim Unit Time Limit: Not Reported 00:10:26.980 RUH Desc #000: RUH Type: Initially Isolated 00:10:26.980 RUH Desc #001: RUH Type: Initially Isolated 00:10:26.980 RUH Desc #002: RUH Type: Initially Isolated 00:10:26.980 RUH Desc #003: RUH Type: Initially Isolated 00:10:26.980 RUH Desc #004: RUH Type: Initially Isolated 00:10:26.981 RUH Desc #005: RUH Type: Initially Isolated 00:10:26.981 RUH Desc #006: RUH Type: Initially Isolated 00:10:26.981 RUH Desc #007: RUH Type: Initially Isolated 00:10:26.981 00:10:26.981 FDP reclaim unit handle usage log page 00:10:26.981 ====================================== 00:10:26.981 Number of Reclaim Unit Handles: 8 00:10:26.981 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:26.981 RUH Usage Desc #001: RUH Attributes: Unused 00:10:26.981 RUH Usage Desc #002: RUH Attributes: Unused 00:10:26.981 RUH Usage Desc #003: RUH Attributes: Unused 00:10:26.981 RUH Usage Desc #004: RUH Attributes: Unused 00:10:26.981 RUH Usage Desc #005: RUH Attributes: Unused 00:10:26.981 RUH Usage Desc #006: RUH Attributes: Unused 00:10:26.981 RUH Usage Desc #007: RUH Attributes: Unused 00:10:26.981 00:10:26.981 FDP statistics log page 00:10:26.981 ======================= 00:10:26.981 Host bytes with metadata written: 446865408 00:10:26.981 Media bytes with metadata written: 446918656 00:10:26.981 Media bytes erased: 0 00:10:26.981 00:10:26.981 FDP events log page 00:10:26.981 =================== 00:10:26.981 Number of FDP events: 0 00:10:26.981 00:10:26.981 NVM Specific Namespace Data 00:10:26.981 =========================== 00:10:26.981 Logical Block Storage Tag Mask: 0 00:10:26.981 Protection Information Capabilities: 00:10:26.981 16b Guard Protection Information Storage Tag Support: No 00:10:26.981 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:26.981 Storage Tag Check Read Support: No 00:10:26.981 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.981 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.981 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.981 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.981 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.981 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.981 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.981 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:26.981 00:10:26.981 real 0m1.607s 00:10:26.981 user 0m0.675s 00:10:26.981 sys 0m0.732s 00:10:26.981 21:09:38 nvme.nvme_identify --
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.981 21:09:38 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:26.981 ************************************ 00:10:26.981 END TEST nvme_identify 00:10:26.981 ************************************ 00:10:26.981 21:09:38 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:26.981 21:09:38 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:26.981 21:09:38 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:26.981 21:09:38 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.981 21:09:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:27.239 ************************************ 00:10:27.239 START TEST nvme_perf 00:10:27.239 ************************************ 00:10:27.239 21:09:38 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:10:27.239 21:09:38 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:28.705 Initializing NVMe Controllers 00:10:28.705 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:28.705 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:28.705 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:28.705 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:28.706 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:28.706 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:28.706 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:28.706 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:28.706 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:28.706 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:28.706 Initialization complete. Launching workers. 
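[Editorial aside] The spdk_nvme_perf run traced above keeps 128 outstanding 12288-byte reads (three 4 KiB blocks per I/O) per namespace for one second; -L enables latency tracking, and doubling it to -LL also prints the per-bucket histograms further down. As a sanity check on the summary below, the Total row is simply the six per-namespace rows combined: 6 x 13337.93 IOPS = 80027.58, matching the reported 80027.60 up to rounding. A standalone re-run against a single controller might look like the sketch below (binary path taken from the trace; the flag glosses are assumptions, spdk_nvme_perf --help is authoritative):

    # Illustrative sketch, not part of the harness.
    #   -r  restrict the probe to one PCIe controller (transport ID syntax
    #       exactly as in the spdk_nvme_identify invocations above)
    #   -q  queue depth   -w  workload   -o  I/O size in bytes   -t  seconds
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    sudo "${PERF}" -r 'trtype:PCIe traddr:0000:00:10.0' \
         -q 128 -w read -o 12288 -t 1 -LL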
00:10:28.706 ======================================================== 00:10:28.706 Latency(us) 00:10:28.706 Device Information : IOPS MiB/s Average min max 00:10:28.706 PCIE (0000:00:10.0) NSID 1 from core 0: 13337.93 156.30 9608.90 7821.43 42284.22 00:10:28.706 PCIE (0000:00:11.0) NSID 1 from core 0: 13337.93 156.30 9586.88 7879.93 39866.30 00:10:28.706 PCIE (0000:00:13.0) NSID 1 from core 0: 13337.93 156.30 9562.47 7796.12 37567.13 00:10:28.706 PCIE (0000:00:12.0) NSID 1 from core 0: 13337.93 156.30 9537.98 7889.15 34860.07 00:10:28.706 PCIE (0000:00:12.0) NSID 2 from core 0: 13337.93 156.30 9513.35 7878.73 32219.06 00:10:28.706 PCIE (0000:00:12.0) NSID 3 from core 0: 13337.93 156.30 9488.39 7904.95 29486.13 00:10:28.706 ======================================================== 00:10:28.706 Total : 80027.60 937.82 9549.66 7796.12 42284.22 00:10:28.706 00:10:28.706 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:28.706 ================================================================================= 00:10:28.706 1.00000% : 8043.055us 00:10:28.706 10.00000% : 8340.945us 00:10:28.706 25.00000% : 8698.415us 00:10:28.706 50.00000% : 9234.618us 00:10:28.706 75.00000% : 9949.556us 00:10:28.706 90.00000% : 10485.760us 00:10:28.706 95.00000% : 11141.120us 00:10:28.706 98.00000% : 12511.418us 00:10:28.706 99.00000% : 13702.982us 00:10:28.706 99.50000% : 34555.345us 00:10:28.706 99.90000% : 41943.040us 00:10:28.706 99.99000% : 42419.665us 00:10:28.706 99.99900% : 42419.665us 00:10:28.706 99.99990% : 42419.665us 00:10:28.706 99.99999% : 42419.665us 00:10:28.706 00:10:28.706 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:28.706 ================================================================================= 00:10:28.706 1.00000% : 8102.633us 00:10:28.706 10.00000% : 8400.524us 00:10:28.706 25.00000% : 8698.415us 00:10:28.706 50.00000% : 9234.618us 00:10:28.706 75.00000% : 9889.978us 00:10:28.706 90.00000% : 10426.182us 00:10:28.706 95.00000% : 11081.542us 00:10:28.706 98.00000% : 12630.575us 00:10:28.706 99.00000% : 13524.247us 00:10:28.706 99.50000% : 32410.531us 00:10:28.706 99.90000% : 39559.913us 00:10:28.706 99.99000% : 40036.538us 00:10:28.706 99.99900% : 40036.538us 00:10:28.706 99.99990% : 40036.538us 00:10:28.706 99.99999% : 40036.538us 00:10:28.706 00:10:28.706 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:28.706 ================================================================================= 00:10:28.706 1.00000% : 8102.633us 00:10:28.706 10.00000% : 8400.524us 00:10:28.706 25.00000% : 8698.415us 00:10:28.706 50.00000% : 9234.618us 00:10:28.706 75.00000% : 9889.978us 00:10:28.706 90.00000% : 10426.182us 00:10:28.706 95.00000% : 11141.120us 00:10:28.706 98.00000% : 12749.731us 00:10:28.706 99.00000% : 13762.560us 00:10:28.706 99.50000% : 30265.716us 00:10:28.706 99.90000% : 37176.785us 00:10:28.706 99.99000% : 37653.411us 00:10:28.706 99.99900% : 37653.411us 00:10:28.706 99.99990% : 37653.411us 00:10:28.706 99.99999% : 37653.411us 00:10:28.706 00:10:28.706 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:28.706 ================================================================================= 00:10:28.706 1.00000% : 8102.633us 00:10:28.706 10.00000% : 8400.524us 00:10:28.706 25.00000% : 8698.415us 00:10:28.706 50.00000% : 9234.618us 00:10:28.706 75.00000% : 9889.978us 00:10:28.706 90.00000% : 10426.182us 00:10:28.706 95.00000% : 11081.542us 00:10:28.706 98.00000% : 12511.418us 00:10:28.706 
99.00000% : 13822.138us 00:10:28.706 99.50000% : 27763.433us 00:10:28.706 99.90000% : 34555.345us 00:10:28.706 99.99000% : 35031.971us 00:10:28.706 99.99900% : 35031.971us 00:10:28.706 99.99990% : 35031.971us 00:10:28.706 99.99999% : 35031.971us 00:10:28.706 00:10:28.706 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:28.706 ================================================================================= 00:10:28.706 1.00000% : 8102.633us 00:10:28.706 10.00000% : 8400.524us 00:10:28.706 25.00000% : 8698.415us 00:10:28.706 50.00000% : 9234.618us 00:10:28.706 75.00000% : 9889.978us 00:10:28.706 90.00000% : 10426.182us 00:10:28.706 95.00000% : 11081.542us 00:10:28.706 98.00000% : 12332.684us 00:10:28.706 99.00000% : 13881.716us 00:10:28.706 99.50000% : 25141.993us 00:10:28.706 99.90000% : 31933.905us 00:10:28.706 99.99000% : 32410.531us 00:10:28.706 99.99900% : 32410.531us 00:10:28.706 99.99990% : 32410.531us 00:10:28.706 99.99999% : 32410.531us 00:10:28.706 00:10:28.706 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:28.706 ================================================================================= 00:10:28.706 1.00000% : 8102.633us 00:10:28.706 10.00000% : 8400.524us 00:10:28.706 25.00000% : 8698.415us 00:10:28.706 50.00000% : 9234.618us 00:10:28.706 75.00000% : 9889.978us 00:10:28.706 90.00000% : 10426.182us 00:10:28.706 95.00000% : 11081.542us 00:10:28.706 98.00000% : 12511.418us 00:10:28.706 99.00000% : 13762.560us 00:10:28.706 99.50000% : 22520.553us 00:10:28.706 99.90000% : 29074.153us 00:10:28.706 99.99000% : 29550.778us 00:10:28.706 99.99900% : 29550.778us 00:10:28.706 99.99990% : 29550.778us 00:10:28.706 99.99999% : 29550.778us 00:10:28.706 00:10:28.706 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:28.706 ============================================================================== 00:10:28.706 Range in us Cumulative IO count 00:10:28.706 7804.742 - 7864.320: 0.0523% ( 7) 00:10:28.706 7864.320 - 7923.898: 0.3738% ( 43) 00:10:28.706 7923.898 - 7983.476: 0.8971% ( 70) 00:10:28.706 7983.476 - 8043.055: 1.7419% ( 113) 00:10:28.706 8043.055 - 8102.633: 2.8708% ( 151) 00:10:28.706 8102.633 - 8162.211: 4.5081% ( 219) 00:10:28.706 8162.211 - 8221.789: 6.4519% ( 260) 00:10:28.706 8221.789 - 8281.367: 8.5825% ( 285) 00:10:28.706 8281.367 - 8340.945: 10.7132% ( 285) 00:10:28.706 8340.945 - 8400.524: 13.2102% ( 334) 00:10:28.706 8400.524 - 8460.102: 15.6848% ( 331) 00:10:28.706 8460.102 - 8519.680: 18.2117% ( 338) 00:10:28.706 8519.680 - 8579.258: 20.7312% ( 337) 00:10:28.706 8579.258 - 8638.836: 23.3553% ( 351) 00:10:28.706 8638.836 - 8698.415: 25.9121% ( 342) 00:10:28.706 8698.415 - 8757.993: 28.5661% ( 355) 00:10:28.706 8757.993 - 8817.571: 31.3771% ( 376) 00:10:28.706 8817.571 - 8877.149: 34.2703% ( 387) 00:10:28.706 8877.149 - 8936.727: 37.2757% ( 402) 00:10:28.706 8936.727 - 8996.305: 40.2661% ( 400) 00:10:28.706 8996.305 - 9055.884: 43.2117% ( 394) 00:10:28.706 9055.884 - 9115.462: 46.0526% ( 380) 00:10:28.706 9115.462 - 9175.040: 48.7590% ( 362) 00:10:28.706 9175.040 - 9234.618: 51.1737% ( 323) 00:10:28.706 9234.618 - 9294.196: 53.4988% ( 311) 00:10:28.706 9294.196 - 9353.775: 55.8762% ( 318) 00:10:28.706 9353.775 - 9413.353: 58.1115% ( 299) 00:10:28.706 9413.353 - 9472.931: 60.4740% ( 316) 00:10:28.706 9472.931 - 9532.509: 62.6346% ( 289) 00:10:28.706 9532.509 - 9592.087: 64.8251% ( 293) 00:10:28.706 9592.087 - 9651.665: 66.9034% ( 278) 00:10:28.706 9651.665 - 9711.244: 68.9818% ( 278) 00:10:28.706 9711.244 - 
9770.822: 71.0377% ( 275) 00:10:28.706 9770.822 - 9830.400: 72.9590% ( 257) 00:10:28.706 9830.400 - 9889.978: 74.9028% ( 260) 00:10:28.706 9889.978 - 9949.556: 76.7868% ( 252) 00:10:28.706 9949.556 - 10009.135: 78.5586% ( 237) 00:10:28.706 10009.135 - 10068.713: 80.4650% ( 255) 00:10:28.706 10068.713 - 10128.291: 82.0649% ( 214) 00:10:28.706 10128.291 - 10187.869: 83.8218% ( 235) 00:10:28.706 10187.869 - 10247.447: 85.2871% ( 196) 00:10:28.706 10247.447 - 10307.025: 86.7001% ( 189) 00:10:28.706 10307.025 - 10366.604: 87.9859% ( 172) 00:10:28.706 10366.604 - 10426.182: 89.1148% ( 151) 00:10:28.706 10426.182 - 10485.760: 90.1540% ( 139) 00:10:28.706 10485.760 - 10545.338: 91.1184% ( 129) 00:10:28.706 10545.338 - 10604.916: 91.9333% ( 109) 00:10:28.706 10604.916 - 10664.495: 92.5239% ( 79) 00:10:28.706 10664.495 - 10724.073: 92.9725% ( 60) 00:10:28.706 10724.073 - 10783.651: 93.3762% ( 54) 00:10:28.706 10783.651 - 10843.229: 93.6977% ( 43) 00:10:28.706 10843.229 - 10902.807: 94.0117% ( 42) 00:10:28.706 10902.807 - 10962.385: 94.2658% ( 34) 00:10:28.706 10962.385 - 11021.964: 94.5051% ( 32) 00:10:28.706 11021.964 - 11081.542: 94.7817% ( 37) 00:10:28.706 11081.542 - 11141.120: 95.0284% ( 33) 00:10:28.706 11141.120 - 11200.698: 95.2975% ( 36) 00:10:28.706 11200.698 - 11260.276: 95.5742% ( 37) 00:10:28.706 11260.276 - 11319.855: 95.8134% ( 32) 00:10:28.706 11319.855 - 11379.433: 96.0676% ( 34) 00:10:28.706 11379.433 - 11439.011: 96.2769% ( 28) 00:10:28.706 11439.011 - 11498.589: 96.4339% ( 21) 00:10:28.706 11498.589 - 11558.167: 96.6059% ( 23) 00:10:28.706 11558.167 - 11617.745: 96.7255% ( 16) 00:10:28.706 11617.745 - 11677.324: 96.8301% ( 14) 00:10:28.706 11677.324 - 11736.902: 96.8900% ( 8) 00:10:28.706 11736.902 - 11796.480: 96.9946% ( 14) 00:10:28.706 11796.480 - 11856.058: 97.0993% ( 14) 00:10:28.706 11856.058 - 11915.636: 97.1965% ( 13) 00:10:28.706 11915.636 - 11975.215: 97.2937% ( 13) 00:10:28.706 11975.215 - 12034.793: 97.3983% ( 14) 00:10:28.706 12034.793 - 12094.371: 97.5179% ( 16) 00:10:28.706 12094.371 - 12153.949: 97.6151% ( 13) 00:10:28.706 12153.949 - 12213.527: 97.7123% ( 13) 00:10:28.706 12213.527 - 12273.105: 97.7796% ( 9) 00:10:28.706 12273.105 - 12332.684: 97.8768% ( 13) 00:10:28.706 12332.684 - 12392.262: 97.9291% ( 7) 00:10:28.706 12392.262 - 12451.840: 97.9889% ( 8) 00:10:28.706 12451.840 - 12511.418: 98.0712% ( 11) 00:10:28.706 12511.418 - 12570.996: 98.1534% ( 11) 00:10:28.706 12570.996 - 12630.575: 98.1983% ( 6) 00:10:28.706 12630.575 - 12690.153: 98.2656% ( 9) 00:10:28.706 12690.153 - 12749.731: 98.3254% ( 8) 00:10:28.706 12749.731 - 12809.309: 98.3852% ( 8) 00:10:28.706 12809.309 - 12868.887: 98.4375% ( 7) 00:10:28.706 12868.887 - 12928.465: 98.5048% ( 9) 00:10:28.706 12928.465 - 12988.044: 98.5795% ( 10) 00:10:28.706 12988.044 - 13047.622: 98.6244% ( 6) 00:10:28.706 13047.622 - 13107.200: 98.6693% ( 6) 00:10:28.706 13107.200 - 13166.778: 98.7216% ( 7) 00:10:28.706 13166.778 - 13226.356: 98.7590% ( 5) 00:10:28.706 13226.356 - 13285.935: 98.8038% ( 6) 00:10:28.706 13285.935 - 13345.513: 98.8487% ( 6) 00:10:28.706 13345.513 - 13405.091: 98.8861% ( 5) 00:10:28.706 13405.091 - 13464.669: 98.9160% ( 4) 00:10:28.706 13464.669 - 13524.247: 98.9384% ( 3) 00:10:28.706 13524.247 - 13583.825: 98.9533% ( 2) 00:10:28.706 13583.825 - 13643.404: 98.9833% ( 4) 00:10:28.707 13643.404 - 13702.982: 99.0057% ( 3) 00:10:28.707 13702.982 - 13762.560: 99.0206% ( 2) 00:10:28.707 13762.560 - 13822.138: 99.0431% ( 3) 00:10:28.707 31695.593 - 31933.905: 99.0505% ( 1) 00:10:28.707 31933.905 - 
32172.218: 99.0804% ( 4) 00:10:28.707 32172.218 - 32410.531: 99.1253% ( 6) 00:10:28.707 32410.531 - 32648.844: 99.1627% ( 5) 00:10:28.707 32648.844 - 32887.156: 99.2150% ( 7) 00:10:28.707 32887.156 - 33125.469: 99.2524% ( 5) 00:10:28.707 33125.469 - 33363.782: 99.2972% ( 6) 00:10:28.707 33363.782 - 33602.095: 99.3421% ( 6) 00:10:28.707 33602.095 - 33840.407: 99.3870% ( 6) 00:10:28.707 33840.407 - 34078.720: 99.4169% ( 4) 00:10:28.707 34078.720 - 34317.033: 99.4692% ( 7) 00:10:28.707 34317.033 - 34555.345: 99.5066% ( 5) 00:10:28.707 34555.345 - 34793.658: 99.5215% ( 2) 00:10:28.707 39559.913 - 39798.225: 99.5440% ( 3) 00:10:28.707 39798.225 - 40036.538: 99.5813% ( 5) 00:10:28.707 40036.538 - 40274.851: 99.6337% ( 7) 00:10:28.707 40274.851 - 40513.164: 99.6711% ( 5) 00:10:28.707 40513.164 - 40751.476: 99.7234% ( 7) 00:10:28.707 40751.476 - 40989.789: 99.7608% ( 5) 00:10:28.707 40989.789 - 41228.102: 99.8056% ( 6) 00:10:28.707 41228.102 - 41466.415: 99.8505% ( 6) 00:10:28.707 41466.415 - 41704.727: 99.8879% ( 5) 00:10:28.707 41704.727 - 41943.040: 99.9402% ( 7) 00:10:28.707 41943.040 - 42181.353: 99.9701% ( 4) 00:10:28.707 42181.353 - 42419.665: 100.0000% ( 4) 00:10:28.707 00:10:28.707 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:28.707 ============================================================================== 00:10:28.707 Range in us Cumulative IO count 00:10:28.707 7864.320 - 7923.898: 0.0449% ( 6) 00:10:28.707 7923.898 - 7983.476: 0.2467% ( 27) 00:10:28.707 7983.476 - 8043.055: 0.7028% ( 61) 00:10:28.707 8043.055 - 8102.633: 1.4055% ( 94) 00:10:28.707 8102.633 - 8162.211: 2.5643% ( 155) 00:10:28.707 8162.211 - 8221.789: 4.1717% ( 215) 00:10:28.707 8221.789 - 8281.367: 6.3696% ( 294) 00:10:28.707 8281.367 - 8340.945: 8.6648% ( 307) 00:10:28.707 8340.945 - 8400.524: 11.2291% ( 343) 00:10:28.707 8400.524 - 8460.102: 14.0849% ( 382) 00:10:28.707 8460.102 - 8519.680: 17.0006% ( 390) 00:10:28.707 8519.680 - 8579.258: 19.9686% ( 397) 00:10:28.707 8579.258 - 8638.836: 22.9217% ( 395) 00:10:28.707 8638.836 - 8698.415: 25.9943% ( 411) 00:10:28.707 8698.415 - 8757.993: 29.0745% ( 412) 00:10:28.707 8757.993 - 8817.571: 32.2294% ( 422) 00:10:28.707 8817.571 - 8877.149: 35.3843% ( 422) 00:10:28.707 8877.149 - 8936.727: 38.3597% ( 398) 00:10:28.707 8936.727 - 8996.305: 41.3651% ( 402) 00:10:28.707 8996.305 - 9055.884: 44.0939% ( 365) 00:10:28.707 9055.884 - 9115.462: 46.5535% ( 329) 00:10:28.707 9115.462 - 9175.040: 48.6693% ( 283) 00:10:28.707 9175.040 - 9234.618: 50.6728% ( 268) 00:10:28.707 9234.618 - 9294.196: 52.6092% ( 259) 00:10:28.707 9294.196 - 9353.775: 54.6053% ( 267) 00:10:28.707 9353.775 - 9413.353: 56.8556% ( 301) 00:10:28.707 9413.353 - 9472.931: 59.1657% ( 309) 00:10:28.707 9472.931 - 9532.509: 61.5879% ( 324) 00:10:28.707 9532.509 - 9592.087: 64.0251% ( 326) 00:10:28.707 9592.087 - 9651.665: 66.4100% ( 319) 00:10:28.707 9651.665 - 9711.244: 68.8397% ( 325) 00:10:28.707 9711.244 - 9770.822: 71.2022% ( 316) 00:10:28.707 9770.822 - 9830.400: 73.5571% ( 315) 00:10:28.707 9830.400 - 9889.978: 75.7700% ( 296) 00:10:28.707 9889.978 - 9949.556: 77.8858% ( 283) 00:10:28.707 9949.556 - 10009.135: 79.8968% ( 269) 00:10:28.707 10009.135 - 10068.713: 81.8032% ( 255) 00:10:28.707 10068.713 - 10128.291: 83.6872% ( 252) 00:10:28.707 10128.291 - 10187.869: 85.4366% ( 234) 00:10:28.707 10187.869 - 10247.447: 87.0514% ( 216) 00:10:28.707 10247.447 - 10307.025: 88.5093% ( 195) 00:10:28.707 10307.025 - 10366.604: 89.6008% ( 146) 00:10:28.707 10366.604 - 10426.182: 90.6848% ( 145) 
00:10:28.707 10426.182 - 10485.760: 91.5221% ( 112) 00:10:28.707 10485.760 - 10545.338: 92.2099% ( 92) 00:10:28.707 10545.338 - 10604.916: 92.6734% ( 62) 00:10:28.707 10604.916 - 10664.495: 93.1220% ( 60) 00:10:28.707 10664.495 - 10724.073: 93.4659% ( 46) 00:10:28.707 10724.073 - 10783.651: 93.7799% ( 42) 00:10:28.707 10783.651 - 10843.229: 94.0565% ( 37) 00:10:28.707 10843.229 - 10902.807: 94.3331% ( 37) 00:10:28.707 10902.807 - 10962.385: 94.6097% ( 37) 00:10:28.707 10962.385 - 11021.964: 94.9013% ( 39) 00:10:28.707 11021.964 - 11081.542: 95.1705% ( 36) 00:10:28.707 11081.542 - 11141.120: 95.4172% ( 33) 00:10:28.707 11141.120 - 11200.698: 95.6863% ( 36) 00:10:28.707 11200.698 - 11260.276: 95.8882% ( 27) 00:10:28.707 11260.276 - 11319.855: 96.0900% ( 27) 00:10:28.707 11319.855 - 11379.433: 96.2993% ( 28) 00:10:28.707 11379.433 - 11439.011: 96.4563% ( 21) 00:10:28.707 11439.011 - 11498.589: 96.5535% ( 13) 00:10:28.707 11498.589 - 11558.167: 96.6657% ( 15) 00:10:28.707 11558.167 - 11617.745: 96.7255% ( 8) 00:10:28.707 11617.745 - 11677.324: 96.7778% ( 7) 00:10:28.707 11677.324 - 11736.902: 96.8227% ( 6) 00:10:28.707 11736.902 - 11796.480: 96.8750% ( 7) 00:10:28.707 11796.480 - 11856.058: 96.9423% ( 9) 00:10:28.707 11856.058 - 11915.636: 97.0170% ( 10) 00:10:28.707 11915.636 - 11975.215: 97.0843% ( 9) 00:10:28.707 11975.215 - 12034.793: 97.1441% ( 8) 00:10:28.707 12034.793 - 12094.371: 97.2413% ( 13) 00:10:28.707 12094.371 - 12153.949: 97.3609% ( 16) 00:10:28.707 12153.949 - 12213.527: 97.4507% ( 12) 00:10:28.707 12213.527 - 12273.105: 97.5254% ( 10) 00:10:28.707 12273.105 - 12332.684: 97.5927% ( 9) 00:10:28.707 12332.684 - 12392.262: 97.6675% ( 10) 00:10:28.707 12392.262 - 12451.840: 97.7572% ( 12) 00:10:28.707 12451.840 - 12511.418: 97.8544% ( 13) 00:10:28.707 12511.418 - 12570.996: 97.9665% ( 15) 00:10:28.707 12570.996 - 12630.575: 98.0562% ( 12) 00:10:28.707 12630.575 - 12690.153: 98.1609% ( 14) 00:10:28.707 12690.153 - 12749.731: 98.2656% ( 14) 00:10:28.707 12749.731 - 12809.309: 98.3702% ( 14) 00:10:28.707 12809.309 - 12868.887: 98.4749% ( 14) 00:10:28.707 12868.887 - 12928.465: 98.5721% ( 13) 00:10:28.707 12928.465 - 12988.044: 98.6468% ( 10) 00:10:28.707 12988.044 - 13047.622: 98.7216% ( 10) 00:10:28.707 13047.622 - 13107.200: 98.8038% ( 11) 00:10:28.707 13107.200 - 13166.778: 98.8562% ( 7) 00:10:28.707 13166.778 - 13226.356: 98.8935% ( 5) 00:10:28.707 13226.356 - 13285.935: 98.9234% ( 4) 00:10:28.707 13285.935 - 13345.513: 98.9459% ( 3) 00:10:28.707 13345.513 - 13405.091: 98.9758% ( 4) 00:10:28.707 13405.091 - 13464.669: 98.9907% ( 2) 00:10:28.707 13464.669 - 13524.247: 99.0132% ( 3) 00:10:28.707 13524.247 - 13583.825: 99.0356% ( 3) 00:10:28.707 13583.825 - 13643.404: 99.0431% ( 1) 00:10:28.707 29908.247 - 30027.404: 99.0580% ( 2) 00:10:28.707 30027.404 - 30146.560: 99.0730% ( 2) 00:10:28.707 30146.560 - 30265.716: 99.1029% ( 4) 00:10:28.707 30265.716 - 30384.873: 99.1253% ( 3) 00:10:28.707 30384.873 - 30504.029: 99.1477% ( 3) 00:10:28.707 30504.029 - 30742.342: 99.1926% ( 6) 00:10:28.707 30742.342 - 30980.655: 99.2374% ( 6) 00:10:28.707 30980.655 - 31218.967: 99.2823% ( 6) 00:10:28.707 31218.967 - 31457.280: 99.3272% ( 6) 00:10:28.707 31457.280 - 31695.593: 99.3720% ( 6) 00:10:28.707 31695.593 - 31933.905: 99.4169% ( 6) 00:10:28.707 31933.905 - 32172.218: 99.4692% ( 7) 00:10:28.707 32172.218 - 32410.531: 99.5066% ( 5) 00:10:28.707 32410.531 - 32648.844: 99.5215% ( 2) 00:10:28.707 37415.098 - 37653.411: 99.5589% ( 5) 00:10:28.707 37653.411 - 37891.724: 99.6112% ( 7) 00:10:28.707 
37891.724 - 38130.036: 99.6486% ( 5) 00:10:28.707 38130.036 - 38368.349: 99.7010% ( 7) 00:10:28.707 38368.349 - 38606.662: 99.7458% ( 6) 00:10:28.707 38606.662 - 38844.975: 99.7907% ( 6) 00:10:28.707 38844.975 - 39083.287: 99.8355% ( 6) 00:10:28.707 39083.287 - 39321.600: 99.8879% ( 7) 00:10:28.707 39321.600 - 39559.913: 99.9327% ( 6) 00:10:28.707 39559.913 - 39798.225: 99.9850% ( 7) 00:10:28.707 39798.225 - 40036.538: 100.0000% ( 2) 00:10:28.707 00:10:28.707 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:28.707 ============================================================================== 00:10:28.707 Range in us Cumulative IO count 00:10:28.707 7745.164 - 7804.742: 0.0150% ( 2) 00:10:28.707 7804.742 - 7864.320: 0.0449% ( 4) 00:10:28.707 7864.320 - 7923.898: 0.0897% ( 6) 00:10:28.707 7923.898 - 7983.476: 0.2766% ( 25) 00:10:28.707 7983.476 - 8043.055: 0.6803% ( 54) 00:10:28.707 8043.055 - 8102.633: 1.4130% ( 98) 00:10:28.707 8102.633 - 8162.211: 2.5120% ( 147) 00:10:28.707 8162.211 - 8221.789: 4.1642% ( 221) 00:10:28.707 8221.789 - 8281.367: 6.3472% ( 292) 00:10:28.707 8281.367 - 8340.945: 8.7993% ( 328) 00:10:28.707 8340.945 - 8400.524: 11.4907% ( 360) 00:10:28.707 8400.524 - 8460.102: 14.2344% ( 367) 00:10:28.707 8460.102 - 8519.680: 17.1202% ( 386) 00:10:28.707 8519.680 - 8579.258: 20.0508% ( 392) 00:10:28.707 8579.258 - 8638.836: 23.0637% ( 403) 00:10:28.707 8638.836 - 8698.415: 26.0242% ( 396) 00:10:28.707 8698.415 - 8757.993: 29.1343% ( 416) 00:10:28.707 8757.993 - 8817.571: 32.2667% ( 419) 00:10:28.707 8817.571 - 8877.149: 35.4217% ( 422) 00:10:28.707 8877.149 - 8936.727: 38.5093% ( 413) 00:10:28.707 8936.727 - 8996.305: 41.5595% ( 408) 00:10:28.707 8996.305 - 9055.884: 44.4453% ( 386) 00:10:28.707 9055.884 - 9115.462: 46.8675% ( 324) 00:10:28.707 9115.462 - 9175.040: 49.1477% ( 305) 00:10:28.707 9175.040 - 9234.618: 51.1065% ( 262) 00:10:28.707 9234.618 - 9294.196: 53.0428% ( 259) 00:10:28.707 9294.196 - 9353.775: 55.1660% ( 284) 00:10:28.707 9353.775 - 9413.353: 57.4836% ( 310) 00:10:28.707 9413.353 - 9472.931: 59.7638% ( 305) 00:10:28.707 9472.931 - 9532.509: 62.1636% ( 321) 00:10:28.707 9532.509 - 9592.087: 64.5858% ( 324) 00:10:28.707 9592.087 - 9651.665: 66.9707% ( 319) 00:10:28.707 9651.665 - 9711.244: 69.4079% ( 326) 00:10:28.707 9711.244 - 9770.822: 71.7255% ( 310) 00:10:28.707 9770.822 - 9830.400: 73.9683% ( 300) 00:10:28.707 9830.400 - 9889.978: 76.1812% ( 296) 00:10:28.707 9889.978 - 9949.556: 78.3119% ( 285) 00:10:28.707 9949.556 - 10009.135: 80.3005% ( 266) 00:10:28.707 10009.135 - 10068.713: 82.1471% ( 247) 00:10:28.707 10068.713 - 10128.291: 83.9414% ( 240) 00:10:28.707 10128.291 - 10187.869: 85.5936% ( 221) 00:10:28.707 10187.869 - 10247.447: 87.0888% ( 200) 00:10:28.707 10247.447 - 10307.025: 88.5242% ( 192) 00:10:28.707 10307.025 - 10366.604: 89.7802% ( 168) 00:10:28.707 10366.604 - 10426.182: 90.8717% ( 146) 00:10:28.707 10426.182 - 10485.760: 91.5595% ( 92) 00:10:28.707 10485.760 - 10545.338: 92.1426% ( 78) 00:10:28.707 10545.338 - 10604.916: 92.6286% ( 65) 00:10:28.707 10604.916 - 10664.495: 93.0697% ( 59) 00:10:28.707 10664.495 - 10724.073: 93.4285% ( 48) 00:10:28.707 10724.073 - 10783.651: 93.7276% ( 40) 00:10:28.707 10783.651 - 10843.229: 93.9593% ( 31) 00:10:28.707 10843.229 - 10902.807: 94.1986% ( 32) 00:10:28.707 10902.807 - 10962.385: 94.4453% ( 33) 00:10:28.707 10962.385 - 11021.964: 94.7069% ( 35) 00:10:28.707 11021.964 - 11081.542: 94.9910% ( 38) 00:10:28.707 11081.542 - 11141.120: 95.2153% ( 30) 00:10:28.707 11141.120 - 
11200.698: 95.4396% ( 30) 00:10:28.708 11200.698 - 11260.276: 95.6714% ( 31) 00:10:28.708 11260.276 - 11319.855: 95.8956% ( 30) 00:10:28.708 11319.855 - 11379.433: 96.1199% ( 30) 00:10:28.708 11379.433 - 11439.011: 96.3741% ( 34) 00:10:28.708 11439.011 - 11498.589: 96.5311% ( 21) 00:10:28.708 11498.589 - 11558.167: 96.6507% ( 16) 00:10:28.708 11558.167 - 11617.745: 96.7404% ( 12) 00:10:28.708 11617.745 - 11677.324: 96.8152% ( 10) 00:10:28.708 11677.324 - 11736.902: 96.9124% ( 13) 00:10:28.708 11736.902 - 11796.480: 96.9946% ( 11) 00:10:28.708 11796.480 - 11856.058: 97.0769% ( 11) 00:10:28.708 11856.058 - 11915.636: 97.1367% ( 8) 00:10:28.708 11915.636 - 11975.215: 97.2039% ( 9) 00:10:28.708 11975.215 - 12034.793: 97.2712% ( 9) 00:10:28.708 12034.793 - 12094.371: 97.3609% ( 12) 00:10:28.708 12094.371 - 12153.949: 97.4432% ( 11) 00:10:28.708 12153.949 - 12213.527: 97.5329% ( 12) 00:10:28.708 12213.527 - 12273.105: 97.6002% ( 9) 00:10:28.708 12273.105 - 12332.684: 97.6600% ( 8) 00:10:28.708 12332.684 - 12392.262: 97.7123% ( 7) 00:10:28.708 12392.262 - 12451.840: 97.7721% ( 8) 00:10:28.708 12451.840 - 12511.418: 97.8170% ( 6) 00:10:28.708 12511.418 - 12570.996: 97.8768% ( 8) 00:10:28.708 12570.996 - 12630.575: 97.9291% ( 7) 00:10:28.708 12630.575 - 12690.153: 97.9740% ( 6) 00:10:28.708 12690.153 - 12749.731: 98.0413% ( 9) 00:10:28.708 12749.731 - 12809.309: 98.1011% ( 8) 00:10:28.708 12809.309 - 12868.887: 98.1833% ( 11) 00:10:28.708 12868.887 - 12928.465: 98.2506% ( 9) 00:10:28.708 12928.465 - 12988.044: 98.3403% ( 12) 00:10:28.708 12988.044 - 13047.622: 98.4076% ( 9) 00:10:28.708 13047.622 - 13107.200: 98.4898% ( 11) 00:10:28.708 13107.200 - 13166.778: 98.5571% ( 9) 00:10:28.708 13166.778 - 13226.356: 98.6319% ( 10) 00:10:28.708 13226.356 - 13285.935: 98.6693% ( 5) 00:10:28.708 13285.935 - 13345.513: 98.7216% ( 7) 00:10:28.708 13345.513 - 13405.091: 98.7739% ( 7) 00:10:28.708 13405.091 - 13464.669: 98.8188% ( 6) 00:10:28.708 13464.669 - 13524.247: 98.8636% ( 6) 00:10:28.708 13524.247 - 13583.825: 98.9160% ( 7) 00:10:28.708 13583.825 - 13643.404: 98.9608% ( 6) 00:10:28.708 13643.404 - 13702.982: 98.9907% ( 4) 00:10:28.708 13702.982 - 13762.560: 99.0132% ( 3) 00:10:28.708 13762.560 - 13822.138: 99.0431% ( 4) 00:10:28.708 27763.433 - 27882.589: 99.0580% ( 2) 00:10:28.708 27882.589 - 28001.745: 99.0804% ( 3) 00:10:28.708 28001.745 - 28120.902: 99.0954% ( 2) 00:10:28.708 28120.902 - 28240.058: 99.1178% ( 3) 00:10:28.708 28240.058 - 28359.215: 99.1403% ( 3) 00:10:28.708 28359.215 - 28478.371: 99.1702% ( 4) 00:10:28.708 28478.371 - 28597.527: 99.1851% ( 2) 00:10:28.708 28597.527 - 28716.684: 99.2150% ( 4) 00:10:28.708 28716.684 - 28835.840: 99.2374% ( 3) 00:10:28.708 28835.840 - 28954.996: 99.2599% ( 3) 00:10:28.708 28954.996 - 29074.153: 99.2823% ( 3) 00:10:28.708 29074.153 - 29193.309: 99.3122% ( 4) 00:10:28.708 29193.309 - 29312.465: 99.3346% ( 3) 00:10:28.708 29312.465 - 29431.622: 99.3571% ( 3) 00:10:28.708 29431.622 - 29550.778: 99.3720% ( 2) 00:10:28.708 29550.778 - 29669.935: 99.3944% ( 3) 00:10:28.708 29669.935 - 29789.091: 99.4169% ( 3) 00:10:28.708 29789.091 - 29908.247: 99.4468% ( 4) 00:10:28.708 29908.247 - 30027.404: 99.4617% ( 2) 00:10:28.708 30027.404 - 30146.560: 99.4842% ( 3) 00:10:28.708 30146.560 - 30265.716: 99.5141% ( 4) 00:10:28.708 30265.716 - 30384.873: 99.5215% ( 1) 00:10:28.708 35031.971 - 35270.284: 99.5290% ( 1) 00:10:28.708 35270.284 - 35508.596: 99.5664% ( 5) 00:10:28.708 35508.596 - 35746.909: 99.6187% ( 7) 00:10:28.708 35746.909 - 35985.222: 99.6636% ( 6) 
00:10:28.708 35985.222 - 36223.535: 99.7159% ( 7) 00:10:28.708 36223.535 - 36461.847: 99.7682% ( 7) 00:10:28.708 36461.847 - 36700.160: 99.8131% ( 6) 00:10:28.708 36700.160 - 36938.473: 99.8654% ( 7) 00:10:28.708 36938.473 - 37176.785: 99.9103% ( 6) 00:10:28.708 37176.785 - 37415.098: 99.9626% ( 7) 00:10:28.708 37415.098 - 37653.411: 100.0000% ( 5) 00:10:28.708 00:10:28.708 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:28.708 ============================================================================== 00:10:28.708 Range in us Cumulative IO count 00:10:28.708 7864.320 - 7923.898: 0.0374% ( 5) 00:10:28.708 7923.898 - 7983.476: 0.2691% ( 31) 00:10:28.708 7983.476 - 8043.055: 0.6504% ( 51) 00:10:28.708 8043.055 - 8102.633: 1.3307% ( 91) 00:10:28.708 8102.633 - 8162.211: 2.3325% ( 134) 00:10:28.708 8162.211 - 8221.789: 3.9548% ( 217) 00:10:28.708 8221.789 - 8281.367: 6.0257% ( 277) 00:10:28.708 8281.367 - 8340.945: 8.4704% ( 327) 00:10:28.708 8340.945 - 8400.524: 11.1618% ( 360) 00:10:28.708 8400.524 - 8460.102: 13.9952% ( 379) 00:10:28.708 8460.102 - 8519.680: 16.9408% ( 394) 00:10:28.708 8519.680 - 8579.258: 19.8938% ( 395) 00:10:28.708 8579.258 - 8638.836: 22.8170% ( 391) 00:10:28.708 8638.836 - 8698.415: 25.9046% ( 413) 00:10:28.708 8698.415 - 8757.993: 29.0446% ( 420) 00:10:28.708 8757.993 - 8817.571: 32.2144% ( 424) 00:10:28.708 8817.571 - 8877.149: 35.3319% ( 417) 00:10:28.708 8877.149 - 8936.727: 38.3523% ( 404) 00:10:28.708 8936.727 - 8996.305: 41.2978% ( 394) 00:10:28.708 8996.305 - 9055.884: 44.1687% ( 384) 00:10:28.708 9055.884 - 9115.462: 46.7404% ( 344) 00:10:28.708 9115.462 - 9175.040: 48.9608% ( 297) 00:10:28.708 9175.040 - 9234.618: 50.9196% ( 262) 00:10:28.708 9234.618 - 9294.196: 52.8932% ( 264) 00:10:28.708 9294.196 - 9353.775: 55.0912% ( 294) 00:10:28.708 9353.775 - 9413.353: 57.3191% ( 298) 00:10:28.708 9413.353 - 9472.931: 59.6217% ( 308) 00:10:28.708 9472.931 - 9532.509: 62.0440% ( 324) 00:10:28.708 9532.509 - 9592.087: 64.3765% ( 312) 00:10:28.708 9592.087 - 9651.665: 66.7763% ( 321) 00:10:28.708 9651.665 - 9711.244: 69.1537% ( 318) 00:10:28.708 9711.244 - 9770.822: 71.5012% ( 314) 00:10:28.708 9770.822 - 9830.400: 73.8263% ( 311) 00:10:28.708 9830.400 - 9889.978: 76.1065% ( 305) 00:10:28.708 9889.978 - 9949.556: 78.3792% ( 304) 00:10:28.708 9949.556 - 10009.135: 80.4949% ( 283) 00:10:28.708 10009.135 - 10068.713: 82.4387% ( 260) 00:10:28.708 10068.713 - 10128.291: 84.2629% ( 244) 00:10:28.708 10128.291 - 10187.869: 85.9225% ( 222) 00:10:28.708 10187.869 - 10247.447: 87.3430% ( 190) 00:10:28.708 10247.447 - 10307.025: 88.6214% ( 171) 00:10:28.708 10307.025 - 10366.604: 89.7727% ( 154) 00:10:28.708 10366.604 - 10426.182: 90.7371% ( 129) 00:10:28.708 10426.182 - 10485.760: 91.5221% ( 105) 00:10:28.708 10485.760 - 10545.338: 92.0604% ( 72) 00:10:28.708 10545.338 - 10604.916: 92.5763% ( 69) 00:10:28.708 10604.916 - 10664.495: 93.0024% ( 57) 00:10:28.708 10664.495 - 10724.073: 93.3762% ( 50) 00:10:28.708 10724.073 - 10783.651: 93.7500% ( 50) 00:10:28.708 10783.651 - 10843.229: 94.0565% ( 41) 00:10:28.708 10843.229 - 10902.807: 94.3556% ( 40) 00:10:28.708 10902.807 - 10962.385: 94.5948% ( 32) 00:10:28.708 10962.385 - 11021.964: 94.8490% ( 34) 00:10:28.708 11021.964 - 11081.542: 95.0733% ( 30) 00:10:28.708 11081.542 - 11141.120: 95.2826% ( 28) 00:10:28.708 11141.120 - 11200.698: 95.4844% ( 27) 00:10:28.708 11200.698 - 11260.276: 95.7162% ( 31) 00:10:28.708 11260.276 - 11319.855: 95.9554% ( 32) 00:10:28.708 11319.855 - 11379.433: 96.1722% ( 29) 
00:10:28.708 11379.433 - 11439.011: 96.3517% ( 24) 00:10:28.708 11439.011 - 11498.589: 96.5386% ( 25) 00:10:28.708 11498.589 - 11558.167: 96.6358% ( 13) 00:10:28.708 11558.167 - 11617.745: 96.7703% ( 18) 00:10:28.708 11617.745 - 11677.324: 96.8750% ( 14) 00:10:28.708 11677.324 - 11736.902: 97.0021% ( 17) 00:10:28.708 11736.902 - 11796.480: 97.1142% ( 15) 00:10:28.708 11796.480 - 11856.058: 97.2413% ( 17) 00:10:28.708 11856.058 - 11915.636: 97.3460% ( 14) 00:10:28.708 11915.636 - 11975.215: 97.4581% ( 15) 00:10:28.708 11975.215 - 12034.793: 97.5628% ( 14) 00:10:28.708 12034.793 - 12094.371: 97.6525% ( 12) 00:10:28.708 12094.371 - 12153.949: 97.7422% ( 12) 00:10:28.708 12153.949 - 12213.527: 97.8095% ( 9) 00:10:28.708 12213.527 - 12273.105: 97.8843% ( 10) 00:10:28.708 12273.105 - 12332.684: 97.9291% ( 6) 00:10:28.708 12332.684 - 12392.262: 97.9665% ( 5) 00:10:28.708 12392.262 - 12451.840: 97.9889% ( 3) 00:10:28.708 12451.840 - 12511.418: 98.0188% ( 4) 00:10:28.708 12511.418 - 12570.996: 98.0413% ( 3) 00:10:28.708 12570.996 - 12630.575: 98.0786% ( 5) 00:10:28.708 12630.575 - 12690.153: 98.1160% ( 5) 00:10:28.708 12690.153 - 12749.731: 98.1459% ( 4) 00:10:28.708 12749.731 - 12809.309: 98.1833% ( 5) 00:10:28.708 12809.309 - 12868.887: 98.2356% ( 7) 00:10:28.708 12868.887 - 12928.465: 98.2880% ( 7) 00:10:28.708 12928.465 - 12988.044: 98.3403% ( 7) 00:10:28.708 12988.044 - 13047.622: 98.3926% ( 7) 00:10:28.708 13047.622 - 13107.200: 98.4450% ( 7) 00:10:28.708 13107.200 - 13166.778: 98.4973% ( 7) 00:10:28.708 13166.778 - 13226.356: 98.5496% ( 7) 00:10:28.708 13226.356 - 13285.935: 98.5945% ( 6) 00:10:28.708 13285.935 - 13345.513: 98.6319% ( 5) 00:10:28.708 13345.513 - 13405.091: 98.6917% ( 8) 00:10:28.708 13405.091 - 13464.669: 98.7440% ( 7) 00:10:28.708 13464.669 - 13524.247: 98.7889% ( 6) 00:10:28.708 13524.247 - 13583.825: 98.8337% ( 6) 00:10:28.708 13583.825 - 13643.404: 98.8861% ( 7) 00:10:28.708 13643.404 - 13702.982: 98.9309% ( 6) 00:10:28.708 13702.982 - 13762.560: 98.9833% ( 7) 00:10:28.708 13762.560 - 13822.138: 99.0057% ( 3) 00:10:28.708 13822.138 - 13881.716: 99.0356% ( 4) 00:10:28.708 13881.716 - 13941.295: 99.0431% ( 1) 00:10:28.708 25261.149 - 25380.305: 99.0655% ( 3) 00:10:28.708 25380.305 - 25499.462: 99.0804% ( 2) 00:10:28.708 25499.462 - 25618.618: 99.1029% ( 3) 00:10:28.708 25618.618 - 25737.775: 99.1328% ( 4) 00:10:28.708 25737.775 - 25856.931: 99.1552% ( 3) 00:10:28.708 25856.931 - 25976.087: 99.1776% ( 3) 00:10:28.708 25976.087 - 26095.244: 99.1926% ( 2) 00:10:28.708 26095.244 - 26214.400: 99.2150% ( 3) 00:10:28.708 26214.400 - 26333.556: 99.2374% ( 3) 00:10:28.708 26333.556 - 26452.713: 99.2599% ( 3) 00:10:28.708 26452.713 - 26571.869: 99.2823% ( 3) 00:10:28.708 26571.869 - 26691.025: 99.3047% ( 3) 00:10:28.708 26691.025 - 26810.182: 99.3272% ( 3) 00:10:28.708 26810.182 - 26929.338: 99.3496% ( 3) 00:10:28.708 26929.338 - 27048.495: 99.3795% ( 4) 00:10:28.708 27048.495 - 27167.651: 99.4019% ( 3) 00:10:28.708 27167.651 - 27286.807: 99.4243% ( 3) 00:10:28.708 27286.807 - 27405.964: 99.4468% ( 3) 00:10:28.708 27405.964 - 27525.120: 99.4692% ( 3) 00:10:28.708 27525.120 - 27644.276: 99.4916% ( 3) 00:10:28.708 27644.276 - 27763.433: 99.5141% ( 3) 00:10:28.708 27763.433 - 27882.589: 99.5215% ( 1) 00:10:28.708 32410.531 - 32648.844: 99.5290% ( 1) 00:10:28.708 32648.844 - 32887.156: 99.5739% ( 6) 00:10:28.708 32887.156 - 33125.469: 99.6262% ( 7) 00:10:28.708 33125.469 - 33363.782: 99.6785% ( 7) 00:10:28.709 33363.782 - 33602.095: 99.7309% ( 7) 00:10:28.709 33602.095 - 33840.407: 
99.7757% ( 6) 00:10:28.709 33840.407 - 34078.720: 99.8281% ( 7) 00:10:28.709 34078.720 - 34317.033: 99.8804% ( 7) 00:10:28.709 34317.033 - 34555.345: 99.9327% ( 7) 00:10:28.709 34555.345 - 34793.658: 99.9850% ( 7) 00:10:28.709 34793.658 - 35031.971: 100.0000% ( 2) 00:10:28.709 00:10:28.709 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:28.709 ============================================================================== 00:10:28.709 Range in us Cumulative IO count 00:10:28.709 7864.320 - 7923.898: 0.0673% ( 9) 00:10:28.709 7923.898 - 7983.476: 0.2467% ( 24) 00:10:28.709 7983.476 - 8043.055: 0.7327% ( 65) 00:10:28.709 8043.055 - 8102.633: 1.5251% ( 106) 00:10:28.709 8102.633 - 8162.211: 2.6690% ( 153) 00:10:28.709 8162.211 - 8221.789: 4.3436% ( 224) 00:10:28.709 8221.789 - 8281.367: 6.4444% ( 281) 00:10:28.709 8281.367 - 8340.945: 8.7769% ( 312) 00:10:28.709 8340.945 - 8400.524: 11.4384% ( 356) 00:10:28.709 8400.524 - 8460.102: 14.2644% ( 378) 00:10:28.709 8460.102 - 8519.680: 17.0529% ( 373) 00:10:28.709 8519.680 - 8579.258: 20.0583% ( 402) 00:10:28.709 8579.258 - 8638.836: 23.0861% ( 405) 00:10:28.709 8638.836 - 8698.415: 26.1364% ( 408) 00:10:28.709 8698.415 - 8757.993: 29.2539% ( 417) 00:10:28.709 8757.993 - 8817.571: 32.3490% ( 414) 00:10:28.709 8817.571 - 8877.149: 35.3917% ( 407) 00:10:28.709 8877.149 - 8936.727: 38.3597% ( 397) 00:10:28.709 8936.727 - 8996.305: 41.2455% ( 386) 00:10:28.709 8996.305 - 9055.884: 43.8771% ( 352) 00:10:28.709 9055.884 - 9115.462: 46.3816% ( 335) 00:10:28.709 9115.462 - 9175.040: 48.4749% ( 280) 00:10:28.709 9175.040 - 9234.618: 50.5383% ( 276) 00:10:28.709 9234.618 - 9294.196: 52.5194% ( 265) 00:10:28.709 9294.196 - 9353.775: 54.6352% ( 283) 00:10:28.709 9353.775 - 9413.353: 56.8331% ( 294) 00:10:28.709 9413.353 - 9472.931: 59.1582% ( 311) 00:10:28.709 9472.931 - 9532.509: 61.5655% ( 322) 00:10:28.709 9532.509 - 9592.087: 63.9055% ( 313) 00:10:28.709 9592.087 - 9651.665: 66.3876% ( 332) 00:10:28.709 9651.665 - 9711.244: 68.7949% ( 322) 00:10:28.709 9711.244 - 9770.822: 71.1947% ( 321) 00:10:28.709 9770.822 - 9830.400: 73.5422% ( 314) 00:10:28.709 9830.400 - 9889.978: 75.8448% ( 308) 00:10:28.709 9889.978 - 9949.556: 78.1026% ( 302) 00:10:28.709 9949.556 - 10009.135: 80.2407% ( 286) 00:10:28.709 10009.135 - 10068.713: 82.2144% ( 264) 00:10:28.709 10068.713 - 10128.291: 84.0535% ( 246) 00:10:28.709 10128.291 - 10187.869: 85.7057% ( 221) 00:10:28.709 10187.869 - 10247.447: 87.1860% ( 198) 00:10:28.709 10247.447 - 10307.025: 88.5392% ( 181) 00:10:28.709 10307.025 - 10366.604: 89.6606% ( 150) 00:10:28.709 10366.604 - 10426.182: 90.7147% ( 141) 00:10:28.709 10426.182 - 10485.760: 91.4548% ( 99) 00:10:28.709 10485.760 - 10545.338: 92.0230% ( 76) 00:10:28.709 10545.338 - 10604.916: 92.5015% ( 64) 00:10:28.709 10604.916 - 10664.495: 92.8977% ( 53) 00:10:28.709 10664.495 - 10724.073: 93.2491% ( 47) 00:10:28.709 10724.073 - 10783.651: 93.5855% ( 45) 00:10:28.709 10783.651 - 10843.229: 93.8696% ( 38) 00:10:28.709 10843.229 - 10902.807: 94.1911% ( 43) 00:10:28.709 10902.807 - 10962.385: 94.5051% ( 42) 00:10:28.709 10962.385 - 11021.964: 94.8191% ( 42) 00:10:28.709 11021.964 - 11081.542: 95.1106% ( 39) 00:10:28.709 11081.542 - 11141.120: 95.3499% ( 32) 00:10:28.709 11141.120 - 11200.698: 95.6265% ( 37) 00:10:28.709 11200.698 - 11260.276: 95.8732% ( 33) 00:10:28.709 11260.276 - 11319.855: 96.1199% ( 33) 00:10:28.709 11319.855 - 11379.433: 96.3218% ( 27) 00:10:28.709 11379.433 - 11439.011: 96.5161% ( 26) 00:10:28.709 11439.011 - 11498.589: 96.6956% 
( 24) 00:10:28.709 11498.589 - 11558.167: 96.8526% ( 21) 00:10:28.709 11558.167 - 11617.745: 96.9871% ( 18) 00:10:28.709 11617.745 - 11677.324: 97.1367% ( 20) 00:10:28.709 11677.324 - 11736.902: 97.2787% ( 19) 00:10:28.709 11736.902 - 11796.480: 97.4208% ( 19) 00:10:28.709 11796.480 - 11856.058: 97.5254% ( 14) 00:10:28.709 11856.058 - 11915.636: 97.6226% ( 13) 00:10:28.709 11915.636 - 11975.215: 97.7123% ( 12) 00:10:28.709 11975.215 - 12034.793: 97.7796% ( 9) 00:10:28.709 12034.793 - 12094.371: 97.8319% ( 7) 00:10:28.709 12094.371 - 12153.949: 97.8843% ( 7) 00:10:28.709 12153.949 - 12213.527: 97.9291% ( 6) 00:10:28.709 12213.527 - 12273.105: 97.9815% ( 7) 00:10:28.709 12273.105 - 12332.684: 98.0039% ( 3) 00:10:28.709 12332.684 - 12392.262: 98.0338% ( 4) 00:10:28.709 12392.262 - 12451.840: 98.0562% ( 3) 00:10:28.709 12451.840 - 12511.418: 98.0712% ( 2) 00:10:28.709 12511.418 - 12570.996: 98.0936% ( 3) 00:10:28.709 12570.996 - 12630.575: 98.1160% ( 3) 00:10:28.709 12630.575 - 12690.153: 98.1385% ( 3) 00:10:28.709 12690.153 - 12749.731: 98.1609% ( 3) 00:10:28.709 12749.731 - 12809.309: 98.1908% ( 4) 00:10:28.709 12809.309 - 12868.887: 98.2282% ( 5) 00:10:28.709 12868.887 - 12928.465: 98.2805% ( 7) 00:10:28.709 12928.465 - 12988.044: 98.3254% ( 6) 00:10:28.709 12988.044 - 13047.622: 98.3852% ( 8) 00:10:28.709 13047.622 - 13107.200: 98.4450% ( 8) 00:10:28.709 13107.200 - 13166.778: 98.4898% ( 6) 00:10:28.709 13166.778 - 13226.356: 98.5496% ( 8) 00:10:28.709 13226.356 - 13285.935: 98.5945% ( 6) 00:10:28.709 13285.935 - 13345.513: 98.6468% ( 7) 00:10:28.709 13345.513 - 13405.091: 98.6992% ( 7) 00:10:28.709 13405.091 - 13464.669: 98.7515% ( 7) 00:10:28.709 13464.669 - 13524.247: 98.7964% ( 6) 00:10:28.709 13524.247 - 13583.825: 98.8562% ( 8) 00:10:28.709 13583.825 - 13643.404: 98.9085% ( 7) 00:10:28.709 13643.404 - 13702.982: 98.9384% ( 4) 00:10:28.709 13702.982 - 13762.560: 98.9683% ( 4) 00:10:28.709 13762.560 - 13822.138: 98.9907% ( 3) 00:10:28.709 13822.138 - 13881.716: 99.0206% ( 4) 00:10:28.709 13881.716 - 13941.295: 99.0431% ( 3) 00:10:28.709 22520.553 - 22639.709: 99.0505% ( 1) 00:10:28.709 22639.709 - 22758.865: 99.0655% ( 2) 00:10:28.709 22758.865 - 22878.022: 99.0879% ( 3) 00:10:28.709 22878.022 - 22997.178: 99.0954% ( 1) 00:10:28.709 22997.178 - 23116.335: 99.1178% ( 3) 00:10:28.709 23116.335 - 23235.491: 99.1403% ( 3) 00:10:28.709 23235.491 - 23354.647: 99.1627% ( 3) 00:10:28.709 23354.647 - 23473.804: 99.1851% ( 3) 00:10:28.709 23473.804 - 23592.960: 99.2150% ( 4) 00:10:28.709 23592.960 - 23712.116: 99.2374% ( 3) 00:10:28.709 23712.116 - 23831.273: 99.2599% ( 3) 00:10:28.709 23831.273 - 23950.429: 99.2823% ( 3) 00:10:28.709 23950.429 - 24069.585: 99.3047% ( 3) 00:10:28.709 24069.585 - 24188.742: 99.3272% ( 3) 00:10:28.709 24188.742 - 24307.898: 99.3496% ( 3) 00:10:28.709 24307.898 - 24427.055: 99.3720% ( 3) 00:10:28.709 24427.055 - 24546.211: 99.3944% ( 3) 00:10:28.709 24546.211 - 24665.367: 99.4169% ( 3) 00:10:28.709 24665.367 - 24784.524: 99.4468% ( 4) 00:10:28.709 24784.524 - 24903.680: 99.4692% ( 3) 00:10:28.709 24903.680 - 25022.836: 99.4916% ( 3) 00:10:28.709 25022.836 - 25141.993: 99.5141% ( 3) 00:10:28.709 25141.993 - 25261.149: 99.5215% ( 1) 00:10:28.709 29908.247 - 30027.404: 99.5440% ( 3) 00:10:28.709 30027.404 - 30146.560: 99.5664% ( 3) 00:10:28.709 30146.560 - 30265.716: 99.5888% ( 3) 00:10:28.709 30265.716 - 30384.873: 99.6112% ( 3) 00:10:28.709 30384.873 - 30504.029: 99.6337% ( 3) 00:10:28.709 30504.029 - 30742.342: 99.6860% ( 7) 00:10:28.709 30742.342 - 30980.655: 
99.7309% ( 6) 00:10:28.709 30980.655 - 31218.967: 99.7832% ( 7) 00:10:28.709 31218.967 - 31457.280: 99.8355% ( 7) 00:10:28.709 31457.280 - 31695.593: 99.8804% ( 6) 00:10:28.709 31695.593 - 31933.905: 99.9327% ( 7) 00:10:28.709 31933.905 - 32172.218: 99.9850% ( 7) 00:10:28.709 32172.218 - 32410.531: 100.0000% ( 2) 00:10:28.709 00:10:28.709 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:28.709 ============================================================================== 00:10:28.709 Range in us Cumulative IO count 00:10:28.709 7864.320 - 7923.898: 0.0224% ( 3) 00:10:28.709 7923.898 - 7983.476: 0.1271% ( 14) 00:10:28.709 7983.476 - 8043.055: 0.5308% ( 54) 00:10:28.709 8043.055 - 8102.633: 1.3457% ( 109) 00:10:28.709 8102.633 - 8162.211: 2.5045% ( 155) 00:10:28.709 8162.211 - 8221.789: 4.1941% ( 226) 00:10:28.709 8221.789 - 8281.367: 6.2500% ( 275) 00:10:28.709 8281.367 - 8340.945: 8.6872% ( 326) 00:10:28.709 8340.945 - 8400.524: 11.4234% ( 366) 00:10:28.709 8400.524 - 8460.102: 14.1447% ( 364) 00:10:28.709 8460.102 - 8519.680: 16.9707% ( 378) 00:10:28.709 8519.680 - 8579.258: 19.9387% ( 397) 00:10:28.709 8579.258 - 8638.836: 22.9740% ( 406) 00:10:28.709 8638.836 - 8698.415: 26.0541% ( 412) 00:10:28.709 8698.415 - 8757.993: 29.2090% ( 422) 00:10:28.709 8757.993 - 8817.571: 32.3565% ( 421) 00:10:28.709 8817.571 - 8877.149: 35.5263% ( 424) 00:10:28.709 8877.149 - 8936.727: 38.5766% ( 408) 00:10:28.709 8936.727 - 8996.305: 41.4474% ( 384) 00:10:28.709 8996.305 - 9055.884: 44.0939% ( 354) 00:10:28.709 9055.884 - 9115.462: 46.5236% ( 325) 00:10:28.709 9115.462 - 9175.040: 48.7141% ( 293) 00:10:28.709 9175.040 - 9234.618: 50.7028% ( 266) 00:10:28.709 9234.618 - 9294.196: 52.5419% ( 246) 00:10:28.709 9294.196 - 9353.775: 54.5754% ( 272) 00:10:28.709 9353.775 - 9413.353: 56.7584% ( 292) 00:10:28.709 9413.353 - 9472.931: 59.1731% ( 323) 00:10:28.709 9472.931 - 9532.509: 61.5206% ( 314) 00:10:28.709 9532.509 - 9592.087: 63.9728% ( 328) 00:10:28.709 9592.087 - 9651.665: 66.4399% ( 330) 00:10:28.709 9651.665 - 9711.244: 68.7650% ( 311) 00:10:28.709 9711.244 - 9770.822: 71.1573% ( 320) 00:10:28.709 9770.822 - 9830.400: 73.4824% ( 311) 00:10:28.709 9830.400 - 9889.978: 75.7476% ( 303) 00:10:28.709 9889.978 - 9949.556: 77.9381% ( 293) 00:10:28.709 9949.556 - 10009.135: 79.9641% ( 271) 00:10:28.709 10009.135 - 10068.713: 81.9378% ( 264) 00:10:28.709 10068.713 - 10128.291: 83.8517% ( 256) 00:10:28.709 10128.291 - 10187.869: 85.5039% ( 221) 00:10:28.709 10187.869 - 10247.447: 87.0514% ( 207) 00:10:28.709 10247.447 - 10307.025: 88.3298% ( 171) 00:10:28.709 10307.025 - 10366.604: 89.4363% ( 148) 00:10:28.709 10366.604 - 10426.182: 90.4904% ( 141) 00:10:28.709 10426.182 - 10485.760: 91.3128% ( 110) 00:10:28.709 10485.760 - 10545.338: 91.9707% ( 88) 00:10:28.709 10545.338 - 10604.916: 92.4566% ( 65) 00:10:28.709 10604.916 - 10664.495: 92.8678% ( 55) 00:10:28.709 10664.495 - 10724.073: 93.1818% ( 42) 00:10:28.709 10724.073 - 10783.651: 93.4958% ( 42) 00:10:28.709 10783.651 - 10843.229: 93.7949% ( 40) 00:10:28.709 10843.229 - 10902.807: 94.1238% ( 44) 00:10:28.709 10902.807 - 10962.385: 94.4453% ( 43) 00:10:28.709 10962.385 - 11021.964: 94.7892% ( 46) 00:10:28.709 11021.964 - 11081.542: 95.1406% ( 47) 00:10:28.709 11081.542 - 11141.120: 95.4620% ( 43) 00:10:28.709 11141.120 - 11200.698: 95.7685% ( 41) 00:10:28.709 11200.698 - 11260.276: 96.0302% ( 35) 00:10:28.709 11260.276 - 11319.855: 96.2545% ( 30) 00:10:28.709 11319.855 - 11379.433: 96.4862% ( 31) 00:10:28.709 11379.433 - 11439.011: 96.6956% 
( 28) 00:10:28.709 11439.011 - 11498.589: 96.8675% ( 23) 00:10:28.709 11498.589 - 11558.167: 97.0320% ( 22) 00:10:28.709 11558.167 - 11617.745: 97.1367% ( 14) 00:10:28.709 11617.745 - 11677.324: 97.2264% ( 12) 00:10:28.710 11677.324 - 11736.902: 97.3161% ( 12) 00:10:28.710 11736.902 - 11796.480: 97.4208% ( 14) 00:10:28.710 11796.480 - 11856.058: 97.5105% ( 12) 00:10:28.710 11856.058 - 11915.636: 97.6226% ( 15) 00:10:28.710 11915.636 - 11975.215: 97.7123% ( 12) 00:10:28.710 11975.215 - 12034.793: 97.7497% ( 5) 00:10:28.710 12034.793 - 12094.371: 97.7871% ( 5) 00:10:28.710 12094.371 - 12153.949: 97.8095% ( 3) 00:10:28.710 12153.949 - 12213.527: 97.8394% ( 4) 00:10:28.710 12213.527 - 12273.105: 97.8618% ( 3) 00:10:28.710 12273.105 - 12332.684: 97.8917% ( 4) 00:10:28.710 12332.684 - 12392.262: 97.9142% ( 3) 00:10:28.710 12392.262 - 12451.840: 97.9665% ( 7) 00:10:28.710 12451.840 - 12511.418: 98.0114% ( 6) 00:10:28.710 12511.418 - 12570.996: 98.0562% ( 6) 00:10:28.710 12570.996 - 12630.575: 98.1011% ( 6) 00:10:28.710 12630.575 - 12690.153: 98.1609% ( 8) 00:10:28.710 12690.153 - 12749.731: 98.2132% ( 7) 00:10:28.710 12749.731 - 12809.309: 98.2880% ( 10) 00:10:28.710 12809.309 - 12868.887: 98.3403% ( 7) 00:10:28.710 12868.887 - 12928.465: 98.3926% ( 7) 00:10:28.710 12928.465 - 12988.044: 98.4525% ( 8) 00:10:28.710 12988.044 - 13047.622: 98.4973% ( 6) 00:10:28.710 13047.622 - 13107.200: 98.5571% ( 8) 00:10:28.710 13107.200 - 13166.778: 98.6020% ( 6) 00:10:28.710 13166.778 - 13226.356: 98.6618% ( 8) 00:10:28.710 13226.356 - 13285.935: 98.7066% ( 6) 00:10:28.710 13285.935 - 13345.513: 98.7590% ( 7) 00:10:28.710 13345.513 - 13405.091: 98.8113% ( 7) 00:10:28.710 13405.091 - 13464.669: 98.8636% ( 7) 00:10:28.710 13464.669 - 13524.247: 98.8935% ( 4) 00:10:28.710 13524.247 - 13583.825: 98.9234% ( 4) 00:10:28.710 13583.825 - 13643.404: 98.9533% ( 4) 00:10:28.710 13643.404 - 13702.982: 98.9758% ( 3) 00:10:28.710 13702.982 - 13762.560: 99.0057% ( 4) 00:10:28.710 13762.560 - 13822.138: 99.0281% ( 3) 00:10:28.710 13822.138 - 13881.716: 99.0431% ( 2) 00:10:28.710 20018.269 - 20137.425: 99.0580% ( 2) 00:10:28.710 20137.425 - 20256.582: 99.0730% ( 2) 00:10:28.710 20256.582 - 20375.738: 99.0954% ( 3) 00:10:28.710 20375.738 - 20494.895: 99.1178% ( 3) 00:10:28.710 20494.895 - 20614.051: 99.1403% ( 3) 00:10:28.710 20614.051 - 20733.207: 99.1627% ( 3) 00:10:28.710 20733.207 - 20852.364: 99.1851% ( 3) 00:10:28.710 20852.364 - 20971.520: 99.2075% ( 3) 00:10:28.710 20971.520 - 21090.676: 99.2300% ( 3) 00:10:28.710 21090.676 - 21209.833: 99.2524% ( 3) 00:10:28.710 21209.833 - 21328.989: 99.2823% ( 4) 00:10:28.710 21328.989 - 21448.145: 99.2972% ( 2) 00:10:28.710 21448.145 - 21567.302: 99.3197% ( 3) 00:10:28.710 21567.302 - 21686.458: 99.3421% ( 3) 00:10:28.710 21686.458 - 21805.615: 99.3645% ( 3) 00:10:28.710 21805.615 - 21924.771: 99.3944% ( 4) 00:10:28.710 21924.771 - 22043.927: 99.4169% ( 3) 00:10:28.710 22043.927 - 22163.084: 99.4318% ( 2) 00:10:28.710 22163.084 - 22282.240: 99.4617% ( 4) 00:10:28.710 22282.240 - 22401.396: 99.4842% ( 3) 00:10:28.710 22401.396 - 22520.553: 99.5066% ( 3) 00:10:28.710 22520.553 - 22639.709: 99.5215% ( 2) 00:10:28.710 27167.651 - 27286.807: 99.5290% ( 1) 00:10:28.710 27286.807 - 27405.964: 99.5514% ( 3) 00:10:28.710 27405.964 - 27525.120: 99.5739% ( 3) 00:10:28.710 27525.120 - 27644.276: 99.5963% ( 3) 00:10:28.710 27644.276 - 27763.433: 99.6262% ( 4) 00:10:28.710 27763.433 - 27882.589: 99.6486% ( 3) 00:10:28.710 27882.589 - 28001.745: 99.6785% ( 4) 00:10:28.710 28001.745 - 28120.902: 
99.7010% ( 3) 00:10:28.710 28120.902 - 28240.058: 99.7234% ( 3) 00:10:28.710 28240.058 - 28359.215: 99.7533% ( 4) 00:10:28.710 28359.215 - 28478.371: 99.7757% ( 3) 00:10:28.710 28478.371 - 28597.527: 99.7981% ( 3) 00:10:28.710 28597.527 - 28716.684: 99.8281% ( 4) 00:10:28.710 28716.684 - 28835.840: 99.8505% ( 3) 00:10:28.710 28835.840 - 28954.996: 99.8804% ( 4) 00:10:28.710 28954.996 - 29074.153: 99.9028% ( 3) 00:10:28.710 29074.153 - 29193.309: 99.9327% ( 4) 00:10:28.710 29193.309 - 29312.465: 99.9551% ( 3) 00:10:28.710 29312.465 - 29431.622: 99.9850% ( 4) 00:10:28.710 29431.622 - 29550.778: 100.0000% ( 2) 00:10:28.710 00:10:28.710 21:09:39 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:30.092 Initializing NVMe Controllers 00:10:30.092 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:30.092 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:30.092 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:30.092 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:30.092 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:30.092 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:30.092 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:30.092 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:30.092 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:30.092 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:30.092 Initialization complete. Launching workers. 00:10:30.092 ======================================================== 00:10:30.092 Latency(us) 00:10:30.092 Device Information : IOPS MiB/s Average min max 00:10:30.092 PCIE (0000:00:10.0) NSID 1 from core 0: 11318.44 132.64 11335.91 7807.49 38395.63 00:10:30.092 PCIE (0000:00:11.0) NSID 1 from core 0: 11318.44 132.64 11321.02 7966.01 36662.19 00:10:30.092 PCIE (0000:00:13.0) NSID 1 from core 0: 11318.44 132.64 11304.57 7844.71 35480.37 00:10:30.092 PCIE (0000:00:12.0) NSID 1 from core 0: 11318.44 132.64 11288.65 7990.52 33887.56 00:10:30.092 PCIE (0000:00:12.0) NSID 2 from core 0: 11318.44 132.64 11271.41 7984.55 32107.30 00:10:30.092 PCIE (0000:00:12.0) NSID 3 from core 0: 11318.44 132.64 11254.15 7911.67 30320.10 00:10:30.092 ======================================================== 00:10:30.092 Total : 67910.62 795.83 11295.95 7807.49 38395.63 00:10:30.092 00:10:30.092 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:30.092 ================================================================================= 00:10:30.092 1.00000% : 8162.211us 00:10:30.092 10.00000% : 8996.305us 00:10:30.092 25.00000% : 9592.087us 00:10:30.092 50.00000% : 10366.604us 00:10:30.092 75.00000% : 11260.276us 00:10:30.092 90.00000% : 13107.200us 00:10:30.092 95.00000% : 21805.615us 00:10:30.092 98.00000% : 24188.742us 00:10:30.092 99.00000% : 26691.025us 00:10:30.092 99.50000% : 36223.535us 00:10:30.092 99.90000% : 38130.036us 00:10:30.092 99.99000% : 38368.349us 00:10:30.092 99.99900% : 38606.662us 00:10:30.092 99.99990% : 38606.662us 00:10:30.092 99.99999% : 38606.662us 00:10:30.092 00:10:30.092 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:30.092 ================================================================================= 00:10:30.092 1.00000% : 8340.945us 00:10:30.092 10.00000% : 9055.884us 00:10:30.092 25.00000% : 9592.087us 00:10:30.092 50.00000% : 10426.182us 00:10:30.092 75.00000% : 11200.698us 00:10:30.092 90.00000% : 12868.887us 
00:10:30.092 95.00000% : 22520.553us 00:10:30.092 98.00000% : 23473.804us 00:10:30.092 99.00000% : 26691.025us 00:10:30.092 99.50000% : 35031.971us 00:10:30.092 99.90000% : 36461.847us 00:10:30.092 99.99000% : 36700.160us 00:10:30.092 99.99900% : 36700.160us 00:10:30.092 99.99990% : 36700.160us 00:10:30.092 99.99999% : 36700.160us 00:10:30.092 00:10:30.092 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:30.092 ================================================================================= 00:10:30.092 1.00000% : 8281.367us 00:10:30.092 10.00000% : 8996.305us 00:10:30.092 25.00000% : 9592.087us 00:10:30.092 50.00000% : 10366.604us 00:10:30.092 75.00000% : 11200.698us 00:10:30.092 90.00000% : 12749.731us 00:10:30.092 95.00000% : 22401.396us 00:10:30.093 98.00000% : 23831.273us 00:10:30.093 99.00000% : 25856.931us 00:10:30.093 99.50000% : 33840.407us 00:10:30.093 99.90000% : 35270.284us 00:10:30.093 99.99000% : 35508.596us 00:10:30.093 99.99900% : 35508.596us 00:10:30.093 99.99990% : 35508.596us 00:10:30.093 99.99999% : 35508.596us 00:10:30.093 00:10:30.093 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:30.093 ================================================================================= 00:10:30.093 1.00000% : 8281.367us 00:10:30.093 10.00000% : 9055.884us 00:10:30.093 25.00000% : 9592.087us 00:10:30.093 50.00000% : 10426.182us 00:10:30.093 75.00000% : 11200.698us 00:10:30.093 90.00000% : 12690.153us 00:10:30.093 95.00000% : 22520.553us 00:10:30.093 98.00000% : 23712.116us 00:10:30.093 99.00000% : 24665.367us 00:10:30.093 99.50000% : 32172.218us 00:10:30.093 99.90000% : 33602.095us 00:10:30.093 99.99000% : 34078.720us 00:10:30.093 99.99900% : 34078.720us 00:10:30.093 99.99990% : 34078.720us 00:10:30.093 99.99999% : 34078.720us 00:10:30.093 00:10:30.093 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:30.093 ================================================================================= 00:10:30.093 1.00000% : 8340.945us 00:10:30.093 10.00000% : 9055.884us 00:10:30.093 25.00000% : 9532.509us 00:10:30.093 50.00000% : 10426.182us 00:10:30.093 75.00000% : 11200.698us 00:10:30.093 90.00000% : 12690.153us 00:10:30.093 95.00000% : 22520.553us 00:10:30.093 98.00000% : 23235.491us 00:10:30.093 99.00000% : 23950.429us 00:10:30.093 99.50000% : 30384.873us 00:10:30.093 99.90000% : 31933.905us 00:10:30.093 99.99000% : 32172.218us 00:10:30.093 99.99900% : 32172.218us 00:10:30.093 99.99990% : 32172.218us 00:10:30.093 99.99999% : 32172.218us 00:10:30.093 00:10:30.093 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:30.093 ================================================================================= 00:10:30.093 1.00000% : 8340.945us 00:10:30.093 10.00000% : 9055.884us 00:10:30.093 25.00000% : 9592.087us 00:10:30.093 50.00000% : 10426.182us 00:10:30.093 75.00000% : 11260.276us 00:10:30.093 90.00000% : 12809.309us 00:10:30.093 95.00000% : 22282.240us 00:10:30.093 98.00000% : 22997.178us 00:10:30.093 99.00000% : 23712.116us 00:10:30.093 99.50000% : 27405.964us 00:10:30.093 99.90000% : 30027.404us 00:10:30.093 99.99000% : 30384.873us 00:10:30.093 99.99900% : 30384.873us 00:10:30.093 99.99990% : 30384.873us 00:10:30.093 99.99999% : 30384.873us 00:10:30.093 00:10:30.093 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:30.093 ============================================================================== 00:10:30.093 Range in us Cumulative IO count 00:10:30.093 7804.742 - 7864.320: 0.0441% ( 5) 
00:10:30.093 [per-bucket latency data elided; buckets span 7864.320 - 38606.662 us, cumulative 100.0000%]
00:10:30.094 
00:10:30.094 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:30.094 ==============================================================================
00:10:30.094        Range in us     Cumulative    IO count
00:10:30.095 [per-bucket latency data elided; buckets span 7923.898 - 36700.160 us, cumulative 100.0000%]
00:10:30.096 
00:10:30.096 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:30.096 ==============================================================================
00:10:30.096        Range in us     Cumulative    IO count
00:10:30.097 [per-bucket latency data elided; buckets span 7804.742 - 35508.596 us, cumulative 100.0000%]
00:10:30.097 
00:10:30.097 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:30.097 ==============================================================================
00:10:30.097        Range in us     Cumulative    IO count
00:10:30.098 [per-bucket latency data elided; buckets span 7983.476 - 34078.720 us, cumulative 100.0000%]
00:10:30.099 
00:10:30.099 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:10:30.099 ==============================================================================
00:10:30.099        Range in us     Cumulative    IO count
00:10:30.099 [per-bucket latency data elided; buckets span 7983.476 - 32172.218 us, cumulative 100.0000%]
00:10:30.100 
00:10:30.100 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:30.100 ==============================================================================
00:10:30.100        Range in us     Cumulative    IO count
00:10:30.101 [per-bucket latency data elided; buckets span 7864.320 - 30384.873 us, cumulative 100.0000%]
00:10:30.101 
00:10:30.101 21:09:41 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:10:30.101 
00:10:30.101 real 0m2.741s
00:10:30.101 user 0m2.318s
00:10:30.101 sys 0m0.307s
00:10:30.101 21:09:41 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:30.101 21:09:41 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:10:30.101 ************************************
00:10:30.101 END TEST nvme_perf
00:10:30.101 ************************************
00:10:30.101 21:09:41 nvme -- common/autotest_common.sh@1142 -- # return 0
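[Each histogram section above reports latency buckets per controller and namespace: a range in microseconds, a cumulative percentage of I/Os completed at or below that range, and the raw I/O count in the bucket. As orientation for reading those columns, here is a minimal C sketch of how a cumulative column can be derived from raw bucket counts. The bucket boundaries and counts below are hypothetical, not values from this run, and this is not the perf tool's actual code.]

    #include <stdio.h>

    /* Hypothetical latency buckets: range in us plus a raw per-bucket IO count. */
    struct bucket { double lo_us, hi_us; unsigned count; };

    int main(void)
    {
        struct bucket b[] = {
            {   0.0, 100.0, 12 },
            { 100.0, 200.0, 80 },
            { 200.0, 400.0,  8 },
        };
        size_t i, n = sizeof(b) / sizeof(b[0]);
        unsigned total = 0, running = 0;

        for (i = 0; i < n; i++)
            total += b[i].count;

        /* Same column layout as the report: range, cumulative %, per-bucket count. */
        printf("%18s %12s %10s\n", "Range in us", "Cumulative", "IO count");
        for (i = 0; i < n; i++) {
            running += b[i].count;   /* cumulative across buckets so far */
            printf("%9.3f - %9.3f: %8.4f%% (%6u)\n",
                   b[i].lo_us, b[i].hi_us, 100.0 * running / total, b[i].count);
        }
        return 0;
    }

[The last bucket always reaches 100.0000%, which is why each histogram above ends there.]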
00:10:30.101 21:09:41 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:30.101 21:09:41 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:10:30.101 21:09:41 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:30.101 21:09:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:30.101 ************************************
00:10:30.101 START TEST nvme_hello_world
00:10:30.101 ************************************
00:10:30.101 21:09:41 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:30.101 Initializing NVMe Controllers
00:10:30.101 Attached to 0000:00:10.0
00:10:30.101 Namespace ID: 1 size: 6GB
00:10:30.101 Attached to 0000:00:11.0
00:10:30.101 Namespace ID: 1 size: 5GB
00:10:30.101 Attached to 0000:00:13.0
00:10:30.101 Namespace ID: 1 size: 1GB
00:10:30.101 Attached to 0000:00:12.0
00:10:30.101 Namespace ID: 1 size: 4GB
00:10:30.101 Namespace ID: 2 size: 4GB
00:10:30.101 Namespace ID: 3 size: 4GB
00:10:30.101 Initialization complete.
00:10:30.101 INFO: using host memory buffer for IO
00:10:30.101 Hello world!
00:10:30.101 INFO: using host memory buffer for IO
00:10:30.101 Hello world!
00:10:30.101 INFO: using host memory buffer for IO
00:10:30.101 Hello world!
00:10:30.101 INFO: using host memory buffer for IO
00:10:30.101 Hello world!
00:10:30.101 INFO: using host memory buffer for IO
00:10:30.101 Hello world!
00:10:30.101 INFO: using host memory buffer for IO
00:10:30.101 Hello world!
00:10:30.101 
00:10:30.101 real 0m0.314s
00:10:30.101 user 0m0.115s
00:10:30.101 sys 0m0.151s
00:10:30.101 21:09:41 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:30.101 21:09:41 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:10:30.101 ************************************
00:10:30.101 END TEST nvme_hello_world
00:10:30.101 ************************************
00:10:30.361 21:09:41 nvme -- common/autotest_common.sh@1142 -- # return 0
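[hello_world is one of the standard SPDK example applications. The "Attached to ..." and "Namespace ID: ... size: ..." lines above are printed from its attach callback while the driver enumerates local PCIe controllers. The following is a compressed C sketch of that enumeration pattern using the public SPDK API, not the actual hello_world.c; error handling and the write/read demonstration are omitted, and the program name is made up.]

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Accept every controller the probe discovers. */
    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;
    }

    /* Print one line per controller and per active namespace, as in the log. */
    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        for (nsid = 1; nsid <= spdk_nvme_ctrlr_get_num_ns(ctrlr); nsid++) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                continue;
            }
            printf("  Namespace ID: %u size: %juGB\n", nsid,
                   (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
        }
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_world_sketch";   /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        printf("Initializing NVMe Controllers\n");
        /* NULL transport ID: enumerate all local PCIe NVMe devices. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        return 0;
    }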
00:10:30.620 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:10:30.620 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:30.620 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:30.620 NVMe Readv/Writev Request test 00:10:30.620 Attached to 0000:00:10.0 00:10:30.620 Attached to 0000:00:11.0 00:10:30.620 Attached to 0000:00:13.0 00:10:30.620 Attached to 0000:00:12.0 00:10:30.620 0000:00:10.0: build_io_request_2 test passed 00:10:30.620 0000:00:10.0: build_io_request_4 test passed 00:10:30.620 0000:00:10.0: build_io_request_5 test passed 00:10:30.620 0000:00:10.0: build_io_request_6 test passed 00:10:30.620 0000:00:10.0: build_io_request_7 test passed 00:10:30.620 0000:00:10.0: build_io_request_10 test passed 00:10:30.620 0000:00:11.0: build_io_request_2 test passed 00:10:30.620 0000:00:11.0: build_io_request_4 test passed 00:10:30.620 0000:00:11.0: build_io_request_5 test passed 00:10:30.620 0000:00:11.0: build_io_request_6 test passed 00:10:30.620 0000:00:11.0: build_io_request_7 test passed 00:10:30.620 0000:00:11.0: build_io_request_10 test passed 00:10:30.620 Cleaning up... 00:10:30.620 00:10:30.620 real 0m0.344s 00:10:30.620 user 0m0.187s 00:10:30.620 sys 0m0.118s 00:10:30.621 21:09:42 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.621 21:09:42 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:30.621 ************************************ 00:10:30.621 END TEST nvme_sgl 00:10:30.621 ************************************ 00:10:30.621 21:09:42 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:30.621 21:09:42 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:30.621 21:09:42 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:30.621 21:09:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.621 21:09:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:30.621 ************************************ 00:10:30.621 START TEST nvme_e2edp 00:10:30.621 ************************************ 00:10:30.621 21:09:42 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:30.880 NVMe Write/Read with End-to-End data protection test 00:10:30.880 Attached to 0000:00:10.0 00:10:30.880 Attached to 0000:00:11.0 00:10:30.880 Attached to 0000:00:13.0 00:10:30.880 Attached to 0000:00:12.0 00:10:30.880 Cleaning up... 
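Note: the "Invalid IO length parameter" lines above are the SGL test's expected negative cases, not failures; the run's result is the "test passed" lines and the final exit status. Both tests in this stretch are standalone binaries, so they can be rerun outside run_test; a sketch under the same $SPDK assumption as above:

  sudo "$SPDK/test/nvme/sgl/sgl"         # drives build_io_request_0..11 against each controller
  sudo "$SPDK/test/nvme/e2edp/nvme_dp"   # end-to-end data protection write/read pass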
00:10:30.880 00:10:30.880 real 0m0.290s 00:10:30.880 user 0m0.115s 00:10:30.880 sys 0m0.131s 00:10:30.880 21:09:42 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.880 21:09:42 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:30.880 ************************************ 00:10:30.880 END TEST nvme_e2edp 00:10:30.880 ************************************ 00:10:30.880 21:09:42 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:30.880 21:09:42 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:30.880 21:09:42 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:30.880 21:09:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.880 21:09:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:30.880 ************************************ 00:10:30.880 START TEST nvme_reserve 00:10:30.880 ************************************ 00:10:30.880 21:09:42 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:31.447 ===================================================== 00:10:31.447 NVMe Controller at PCI bus 0, device 16, function 0 00:10:31.447 ===================================================== 00:10:31.447 Reservations: Not Supported 00:10:31.447 ===================================================== 00:10:31.447 NVMe Controller at PCI bus 0, device 17, function 0 00:10:31.447 ===================================================== 00:10:31.447 Reservations: Not Supported 00:10:31.447 ===================================================== 00:10:31.447 NVMe Controller at PCI bus 0, device 19, function 0 00:10:31.447 ===================================================== 00:10:31.447 Reservations: Not Supported 00:10:31.447 ===================================================== 00:10:31.447 NVMe Controller at PCI bus 0, device 18, function 0 00:10:31.447 ===================================================== 00:10:31.447 Reservations: Not Supported 00:10:31.447 Reservation test passed 00:10:31.447 00:10:31.447 real 0m0.295s 00:10:31.447 user 0m0.117s 00:10:31.447 sys 0m0.133s 00:10:31.447 21:09:42 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.447 21:09:42 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:31.447 ************************************ 00:10:31.447 END TEST nvme_reserve 00:10:31.447 ************************************ 00:10:31.447 21:09:42 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:31.448 21:09:42 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:31.448 21:09:42 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.448 21:09:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.448 21:09:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.448 ************************************ 00:10:31.448 START TEST nvme_err_injection 00:10:31.448 ************************************ 00:10:31.448 21:09:42 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:31.707 NVMe Error Injection test 00:10:31.707 Attached to 0000:00:10.0 00:10:31.707 Attached to 0000:00:11.0 00:10:31.707 Attached to 0000:00:13.0 00:10:31.707 Attached to 0000:00:12.0 00:10:31.707 0000:00:10.0: get features failed as expected 00:10:31.707 0000:00:11.0: get features 
failed as expected 00:10:31.707 0000:00:13.0: get features failed as expected 00:10:31.707 0000:00:12.0: get features failed as expected 00:10:31.707 0000:00:10.0: get features successfully as expected 00:10:31.707 0000:00:11.0: get features successfully as expected 00:10:31.707 0000:00:13.0: get features successfully as expected 00:10:31.707 0000:00:12.0: get features successfully as expected 00:10:31.707 0000:00:10.0: read failed as expected 00:10:31.707 0000:00:11.0: read failed as expected 00:10:31.707 0000:00:13.0: read failed as expected 00:10:31.707 0000:00:12.0: read failed as expected 00:10:31.707 0000:00:10.0: read successfully as expected 00:10:31.707 0000:00:11.0: read successfully as expected 00:10:31.707 0000:00:13.0: read successfully as expected 00:10:31.707 0000:00:12.0: read successfully as expected 00:10:31.707 Cleaning up... 00:10:31.707 00:10:31.707 real 0m0.300s 00:10:31.707 user 0m0.120s 00:10:31.707 sys 0m0.140s 00:10:31.707 21:09:43 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.707 21:09:43 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:31.707 ************************************ 00:10:31.707 END TEST nvme_err_injection 00:10:31.707 ************************************ 00:10:31.707 21:09:43 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:31.707 21:09:43 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:31.707 21:09:43 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:31.707 21:09:43 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.707 21:09:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.707 ************************************ 00:10:31.707 START TEST nvme_overhead 00:10:31.707 ************************************ 00:10:31.707 21:09:43 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:33.081 Initializing NVMe Controllers 00:10:33.081 Attached to 0000:00:10.0 00:10:33.081 Attached to 0000:00:11.0 00:10:33.081 Attached to 0000:00:13.0 00:10:33.081 Attached to 0000:00:12.0 00:10:33.081 Initialization complete. Launching workers. 
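Note: the overhead run that produces the two histograms below was started with '-o 4096 -t 1 -H -i 0'. A sketch of a manual rerun; the flag readings here are inferred from the output that follows, not taken from the tool's help text:

  # -o 4096 : 4 KiB I/O size            -t 1 : one-second run (matches the ~1.3 s wall time)
  # -H      : emit the submit/complete histograms shown below (inferred)
  # -i 0    : shared-memory id, as used throughout this job
  sudo "$SPDK/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0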
00:10:33.081 submit (in ns) avg, min, max = 17331.8, 13240.9, 60714.5 00:10:33.082 complete (in ns) avg, min, max = 12457.3, 8808.2, 134018.2 00:10:33.082 00:10:33.082 Submit histogram 00:10:33.082 ================ 00:10:33.082 Range in us Cumulative Count 00:10:33.082 13.207 - 13.265: 0.0235% ( 2) 00:10:33.082 13.265 - 13.324: 0.0705% ( 4) 00:10:33.082 13.324 - 13.382: 0.1763% ( 9) 00:10:33.082 13.382 - 13.440: 0.4584% ( 24) 00:10:33.082 13.440 - 13.498: 0.8346% ( 32) 00:10:33.082 13.498 - 13.556: 1.3283% ( 42) 00:10:33.082 13.556 - 13.615: 2.0924% ( 65) 00:10:33.082 13.615 - 13.673: 2.8917% ( 68) 00:10:33.082 13.673 - 13.731: 3.7381% ( 72) 00:10:33.082 13.731 - 13.789: 5.3485% ( 137) 00:10:33.082 13.789 - 13.847: 6.9472% ( 136) 00:10:33.082 13.847 - 13.905: 8.9691% ( 172) 00:10:33.082 13.905 - 13.964: 11.1555% ( 186) 00:10:33.082 13.964 - 14.022: 13.5418% ( 203) 00:10:33.082 14.022 - 14.080: 16.1867% ( 225) 00:10:33.082 14.080 - 14.138: 19.2547% ( 261) 00:10:33.082 14.138 - 14.196: 22.4521% ( 272) 00:10:33.082 14.196 - 14.255: 25.7435% ( 280) 00:10:33.082 14.255 - 14.313: 28.9644% ( 274) 00:10:33.082 14.313 - 14.371: 32.1853% ( 274) 00:10:33.082 14.371 - 14.429: 34.6891% ( 213) 00:10:33.082 14.429 - 14.487: 36.8402% ( 183) 00:10:33.082 14.487 - 14.545: 38.6270% ( 152) 00:10:33.082 14.545 - 14.604: 40.1081% ( 126) 00:10:33.082 14.604 - 14.662: 41.4600% ( 115) 00:10:33.082 14.662 - 14.720: 43.1997% ( 148) 00:10:33.082 14.720 - 14.778: 44.7866% ( 135) 00:10:33.082 14.778 - 14.836: 45.9269% ( 97) 00:10:33.082 14.836 - 14.895: 47.0554% ( 96) 00:10:33.082 14.895 - 15.011: 49.1713% ( 180) 00:10:33.082 15.011 - 15.127: 51.9102% ( 233) 00:10:33.082 15.127 - 15.244: 54.3435% ( 207) 00:10:33.082 15.244 - 15.360: 55.9069% ( 133) 00:10:33.082 15.360 - 15.476: 56.7650% ( 73) 00:10:33.082 15.476 - 15.593: 57.3880% ( 53) 00:10:33.082 15.593 - 15.709: 58.0228% ( 54) 00:10:33.082 15.709 - 15.825: 58.6223% ( 51) 00:10:33.082 15.825 - 15.942: 59.2571% ( 54) 00:10:33.082 15.942 - 16.058: 59.6803% ( 36) 00:10:33.082 16.058 - 16.175: 59.9389% ( 22) 00:10:33.082 16.175 - 16.291: 60.2445% ( 26) 00:10:33.082 16.291 - 16.407: 60.4091% ( 14) 00:10:33.082 16.407 - 16.524: 60.5736% ( 14) 00:10:33.082 16.524 - 16.640: 60.7265% ( 13) 00:10:33.082 16.640 - 16.756: 60.8793% ( 13) 00:10:33.082 16.756 - 16.873: 60.9733% ( 8) 00:10:33.082 16.873 - 16.989: 61.0438% ( 6) 00:10:33.082 16.989 - 17.105: 61.1379% ( 8) 00:10:33.082 17.105 - 17.222: 61.1967% ( 5) 00:10:33.082 17.222 - 17.338: 61.2437% ( 4) 00:10:33.082 17.338 - 17.455: 61.2672% ( 2) 00:10:33.082 17.455 - 17.571: 61.3377% ( 6) 00:10:33.082 17.571 - 17.687: 61.5493% ( 18) 00:10:33.082 17.687 - 17.804: 63.0892% ( 131) 00:10:33.082 17.804 - 17.920: 67.6737% ( 390) 00:10:33.082 17.920 - 18.036: 73.8333% ( 524) 00:10:33.082 18.036 - 18.153: 77.6184% ( 322) 00:10:33.082 18.153 - 18.269: 78.6529% ( 88) 00:10:33.082 18.269 - 18.385: 79.1701% ( 44) 00:10:33.082 18.385 - 18.502: 79.9107% ( 63) 00:10:33.082 18.502 - 18.618: 80.7100% ( 68) 00:10:33.082 18.618 - 18.735: 81.4976% ( 67) 00:10:33.082 18.735 - 18.851: 82.1794% ( 58) 00:10:33.082 18.851 - 18.967: 82.6848% ( 43) 00:10:33.082 18.967 - 19.084: 83.1198% ( 37) 00:10:33.082 19.084 - 19.200: 83.2726% ( 13) 00:10:33.082 19.200 - 19.316: 83.5900% ( 27) 00:10:33.082 19.316 - 19.433: 83.7781% ( 16) 00:10:33.082 19.433 - 19.549: 84.0602% ( 24) 00:10:33.082 19.549 - 19.665: 84.2835% ( 19) 00:10:33.082 19.665 - 19.782: 84.5421% ( 22) 00:10:33.082 19.782 - 19.898: 84.6714% ( 11) 00:10:33.082 19.898 - 20.015: 84.8595% ( 16) 
00:10:33.082 20.015 - 20.131: 85.1064% ( 21) 00:10:33.082 20.131 - 20.247: 85.2592% ( 13) 00:10:33.082 20.247 - 20.364: 85.4238% ( 14) 00:10:33.082 20.364 - 20.480: 85.5413% ( 10) 00:10:33.082 20.480 - 20.596: 85.6589% ( 10) 00:10:33.082 20.596 - 20.713: 85.7529% ( 8) 00:10:33.082 20.713 - 20.829: 85.9175% ( 14) 00:10:33.082 20.829 - 20.945: 86.1761% ( 22) 00:10:33.082 20.945 - 21.062: 86.3407% ( 14) 00:10:33.082 21.062 - 21.178: 86.4935% ( 13) 00:10:33.082 21.178 - 21.295: 86.5758% ( 7) 00:10:33.082 21.295 - 21.411: 86.7521% ( 15) 00:10:33.082 21.411 - 21.527: 86.9637% ( 18) 00:10:33.082 21.527 - 21.644: 87.0577% ( 8) 00:10:33.082 21.644 - 21.760: 87.2340% ( 15) 00:10:33.082 21.760 - 21.876: 87.3633% ( 11) 00:10:33.082 21.876 - 21.993: 87.5514% ( 16) 00:10:33.082 21.993 - 22.109: 87.7042% ( 13) 00:10:33.082 22.109 - 22.225: 87.7983% ( 8) 00:10:33.082 22.225 - 22.342: 87.8923% ( 8) 00:10:33.082 22.342 - 22.458: 88.0099% ( 10) 00:10:33.082 22.458 - 22.575: 88.1274% ( 10) 00:10:33.082 22.575 - 22.691: 88.1744% ( 4) 00:10:33.082 22.691 - 22.807: 88.2685% ( 8) 00:10:33.082 22.807 - 22.924: 88.3508% ( 7) 00:10:33.082 22.924 - 23.040: 88.4801% ( 11) 00:10:33.082 23.040 - 23.156: 88.5506% ( 6) 00:10:33.082 23.156 - 23.273: 88.6094% ( 5) 00:10:33.082 23.273 - 23.389: 88.7034% ( 8) 00:10:33.082 23.389 - 23.505: 88.7504% ( 4) 00:10:33.082 23.505 - 23.622: 88.8092% ( 5) 00:10:33.082 23.622 - 23.738: 88.8797% ( 6) 00:10:33.082 23.738 - 23.855: 88.9385% ( 5) 00:10:33.082 23.855 - 23.971: 89.0091% ( 6) 00:10:33.082 23.971 - 24.087: 89.1031% ( 8) 00:10:33.082 24.087 - 24.204: 89.1971% ( 8) 00:10:33.082 24.204 - 24.320: 89.2912% ( 8) 00:10:33.082 24.320 - 24.436: 89.3499% ( 5) 00:10:33.082 24.436 - 24.553: 89.4557% ( 9) 00:10:33.082 24.553 - 24.669: 89.5263% ( 6) 00:10:33.082 24.669 - 24.785: 89.6438% ( 10) 00:10:33.082 24.785 - 24.902: 89.8201% ( 15) 00:10:33.082 24.902 - 25.018: 89.9142% ( 8) 00:10:33.082 25.018 - 25.135: 89.9847% ( 6) 00:10:33.082 25.135 - 25.251: 90.0788% ( 8) 00:10:33.082 25.251 - 25.367: 90.1375% ( 5) 00:10:33.082 25.367 - 25.484: 90.2786% ( 12) 00:10:33.082 25.484 - 25.600: 90.3609% ( 7) 00:10:33.082 25.600 - 25.716: 90.4667% ( 9) 00:10:33.082 25.716 - 25.833: 90.5960% ( 11) 00:10:33.082 25.833 - 25.949: 90.6665% ( 6) 00:10:33.082 25.949 - 26.065: 90.8076% ( 12) 00:10:33.082 26.065 - 26.182: 90.8781% ( 6) 00:10:33.082 26.182 - 26.298: 90.9251% ( 4) 00:10:33.082 26.298 - 26.415: 91.0192% ( 8) 00:10:33.082 26.415 - 26.531: 91.0309% ( 1) 00:10:33.082 26.531 - 26.647: 91.0662% ( 3) 00:10:33.082 26.647 - 26.764: 91.1250% ( 5) 00:10:33.082 26.764 - 26.880: 91.1602% ( 3) 00:10:33.082 26.880 - 26.996: 91.2072% ( 4) 00:10:33.082 26.996 - 27.113: 91.2778% ( 6) 00:10:33.082 27.113 - 27.229: 91.3953% ( 10) 00:10:33.082 27.229 - 27.345: 91.5129% ( 10) 00:10:33.082 27.345 - 27.462: 91.6187% ( 9) 00:10:33.082 27.462 - 27.578: 91.6657% ( 4) 00:10:33.082 27.578 - 27.695: 91.7480% ( 7) 00:10:33.082 27.695 - 27.811: 91.8067% ( 5) 00:10:33.082 27.811 - 27.927: 91.9243% ( 10) 00:10:33.082 27.927 - 28.044: 92.0066% ( 7) 00:10:33.082 28.044 - 28.160: 92.1241% ( 10) 00:10:33.082 28.160 - 28.276: 92.2652% ( 12) 00:10:33.082 28.276 - 28.393: 92.4415% ( 15) 00:10:33.082 28.393 - 28.509: 92.6414% ( 17) 00:10:33.082 28.509 - 28.625: 92.9940% ( 30) 00:10:33.082 28.625 - 28.742: 93.4289% ( 37) 00:10:33.082 28.742 - 28.858: 94.0755% ( 55) 00:10:33.082 28.858 - 28.975: 94.6162% ( 46) 00:10:33.082 28.975 - 29.091: 95.2745% ( 56) 00:10:33.082 29.091 - 29.207: 95.7094% ( 37) 00:10:33.082 29.207 - 29.324: 96.2972% ( 
50) 00:10:33.082 29.324 - 29.440: 96.6263% ( 28) 00:10:33.082 29.440 - 29.556: 96.9084% ( 24) 00:10:33.082 29.556 - 29.673: 97.1788% ( 23) 00:10:33.082 29.673 - 29.789: 97.4492% ( 23) 00:10:33.082 29.789 - 30.022: 97.8136% ( 31) 00:10:33.082 30.022 - 30.255: 98.0134% ( 17) 00:10:33.082 30.255 - 30.487: 98.2132% ( 17) 00:10:33.082 30.487 - 30.720: 98.3190% ( 9) 00:10:33.082 30.720 - 30.953: 98.4131% ( 8) 00:10:33.082 30.953 - 31.185: 98.5306% ( 10) 00:10:33.082 31.185 - 31.418: 98.6364% ( 9) 00:10:33.082 31.418 - 31.651: 98.6717% ( 3) 00:10:33.082 31.651 - 31.884: 98.6834% ( 1) 00:10:33.082 31.884 - 32.116: 98.7069% ( 2) 00:10:33.082 32.116 - 32.349: 98.7187% ( 1) 00:10:33.082 32.349 - 32.582: 98.7305% ( 1) 00:10:33.082 32.582 - 32.815: 98.7422% ( 1) 00:10:33.082 32.815 - 33.047: 98.7540% ( 1) 00:10:33.082 33.280 - 33.513: 98.7657% ( 1) 00:10:33.082 33.745 - 33.978: 98.8127% ( 4) 00:10:33.082 34.211 - 34.444: 98.8363% ( 2) 00:10:33.082 34.444 - 34.676: 98.8833% ( 4) 00:10:33.082 34.676 - 34.909: 98.9068% ( 2) 00:10:33.082 34.909 - 35.142: 98.9538% ( 4) 00:10:33.082 35.142 - 35.375: 98.9773% ( 2) 00:10:33.082 35.375 - 35.607: 98.9891% ( 1) 00:10:33.082 35.607 - 35.840: 99.0126% ( 2) 00:10:33.082 35.840 - 36.073: 99.0361% ( 2) 00:10:33.082 36.073 - 36.305: 99.0831% ( 4) 00:10:33.082 36.305 - 36.538: 99.1301% ( 4) 00:10:33.082 36.538 - 36.771: 99.1889% ( 5) 00:10:33.082 36.771 - 37.004: 99.2359% ( 4) 00:10:33.082 37.004 - 37.236: 99.2712% ( 3) 00:10:33.082 37.236 - 37.469: 99.3182% ( 4) 00:10:33.082 37.469 - 37.702: 99.3887% ( 6) 00:10:33.082 37.935 - 38.167: 99.4358% ( 4) 00:10:33.083 38.865 - 39.098: 99.4593% ( 2) 00:10:33.083 39.098 - 39.331: 99.4710% ( 1) 00:10:33.083 39.331 - 39.564: 99.4945% ( 2) 00:10:33.083 40.029 - 40.262: 99.5063% ( 1) 00:10:33.083 40.262 - 40.495: 99.5180% ( 1) 00:10:33.083 40.495 - 40.727: 99.5298% ( 1) 00:10:33.083 40.727 - 40.960: 99.5416% ( 1) 00:10:33.083 40.960 - 41.193: 99.5768% ( 3) 00:10:33.083 41.193 - 41.425: 99.5886% ( 1) 00:10:33.083 41.658 - 41.891: 99.6003% ( 1) 00:10:33.083 42.356 - 42.589: 99.6121% ( 1) 00:10:33.083 42.589 - 42.822: 99.6356% ( 2) 00:10:33.083 43.753 - 43.985: 99.6473% ( 1) 00:10:33.083 43.985 - 44.218: 99.6826% ( 3) 00:10:33.083 44.218 - 44.451: 99.7061% ( 2) 00:10:33.083 44.684 - 44.916: 99.7414% ( 3) 00:10:33.083 44.916 - 45.149: 99.7649% ( 2) 00:10:33.083 45.149 - 45.382: 99.7767% ( 1) 00:10:33.083 45.382 - 45.615: 99.7884% ( 1) 00:10:33.083 45.615 - 45.847: 99.8002% ( 1) 00:10:33.083 46.313 - 46.545: 99.8119% ( 1) 00:10:33.083 46.778 - 47.011: 99.8472% ( 3) 00:10:33.083 47.476 - 47.709: 99.8589% ( 1) 00:10:33.083 50.036 - 50.269: 99.8707% ( 1) 00:10:33.083 51.200 - 51.433: 99.8824% ( 1) 00:10:33.083 51.665 - 51.898: 99.8942% ( 1) 00:10:33.083 52.131 - 52.364: 99.9060% ( 1) 00:10:33.083 52.829 - 53.062: 99.9177% ( 1) 00:10:33.083 54.458 - 54.691: 99.9295% ( 1) 00:10:33.083 55.389 - 55.622: 99.9412% ( 1) 00:10:33.083 57.018 - 57.251: 99.9647% ( 2) 00:10:33.083 57.716 - 57.949: 99.9765% ( 1) 00:10:33.083 59.345 - 59.578: 99.9882% ( 1) 00:10:33.083 60.509 - 60.975: 100.0000% ( 1) 00:10:33.083 00:10:33.083 Complete histogram 00:10:33.083 ================== 00:10:33.083 Range in us Cumulative Count 00:10:33.083 8.785 - 8.844: 0.0235% ( 2) 00:10:33.083 8.844 - 8.902: 0.0353% ( 1) 00:10:33.083 8.902 - 8.960: 0.2469% ( 18) 00:10:33.083 8.960 - 9.018: 0.3997% ( 13) 00:10:33.083 9.018 - 9.076: 0.6348% ( 20) 00:10:33.083 9.076 - 9.135: 1.0932% ( 39) 00:10:33.083 9.135 - 9.193: 2.1041% ( 86) 00:10:33.083 9.193 - 9.251: 3.7616% ( 141) 
00:10:33.083 9.251 - 9.309: 5.3250% ( 133) 00:10:33.083 9.309 - 9.367: 6.7709% ( 123) 00:10:33.083 9.367 - 9.425: 9.3452% ( 219) 00:10:33.083 9.425 - 9.484: 12.9893% ( 310) 00:10:33.083 9.484 - 9.542: 18.0557% ( 431) 00:10:33.083 9.542 - 9.600: 22.3345% ( 364) 00:10:33.083 9.600 - 9.658: 25.8493% ( 299) 00:10:33.083 9.658 - 9.716: 28.7058% ( 243) 00:10:33.083 9.716 - 9.775: 31.9266% ( 274) 00:10:33.083 9.775 - 9.833: 35.3944% ( 295) 00:10:33.083 9.833 - 9.891: 37.9570% ( 218) 00:10:33.083 9.891 - 9.949: 39.6732% ( 146) 00:10:33.083 9.949 - 10.007: 41.1073% ( 122) 00:10:33.083 10.007 - 10.065: 42.8236% ( 146) 00:10:33.083 10.065 - 10.124: 45.5155% ( 229) 00:10:33.083 10.124 - 10.182: 48.3719% ( 243) 00:10:33.083 10.182 - 10.240: 50.3938% ( 172) 00:10:33.083 10.240 - 10.298: 51.8984% ( 128) 00:10:33.083 10.298 - 10.356: 53.1327% ( 105) 00:10:33.083 10.356 - 10.415: 53.9908% ( 73) 00:10:33.083 10.415 - 10.473: 54.9665% ( 83) 00:10:33.083 10.473 - 10.531: 55.8246% ( 73) 00:10:33.083 10.531 - 10.589: 56.3183% ( 42) 00:10:33.083 10.589 - 10.647: 56.7180% ( 34) 00:10:33.083 10.647 - 10.705: 56.9649% ( 21) 00:10:33.083 10.705 - 10.764: 57.1764% ( 18) 00:10:33.083 10.764 - 10.822: 57.4351% ( 22) 00:10:33.083 10.822 - 10.880: 57.6466% ( 18) 00:10:33.083 10.880 - 10.938: 57.7759% ( 11) 00:10:33.083 10.938 - 10.996: 57.9758% ( 17) 00:10:33.083 10.996 - 11.055: 58.3284% ( 30) 00:10:33.083 11.055 - 11.113: 58.5635% ( 20) 00:10:33.083 11.113 - 11.171: 58.8104% ( 21) 00:10:33.083 11.171 - 11.229: 58.9867% ( 15) 00:10:33.083 11.229 - 11.287: 59.1160% ( 11) 00:10:33.083 11.287 - 11.345: 59.2218% ( 9) 00:10:33.083 11.345 - 11.404: 59.3394% ( 10) 00:10:33.083 11.404 - 11.462: 59.3511% ( 1) 00:10:33.083 11.462 - 11.520: 59.3864% ( 3) 00:10:33.083 11.520 - 11.578: 59.4334% ( 4) 00:10:33.083 11.578 - 11.636: 59.4569% ( 2) 00:10:33.083 11.636 - 11.695: 59.4922% ( 3) 00:10:33.083 11.695 - 11.753: 59.5157% ( 2) 00:10:33.083 11.753 - 11.811: 59.5510% ( 3) 00:10:33.083 11.811 - 11.869: 59.6215% ( 6) 00:10:33.083 11.869 - 11.927: 59.6685% ( 4) 00:10:33.083 11.927 - 11.985: 59.7390% ( 6) 00:10:33.083 11.985 - 12.044: 59.8096% ( 6) 00:10:33.083 12.044 - 12.102: 60.1975% ( 33) 00:10:33.083 12.102 - 12.160: 62.0313% ( 156) 00:10:33.083 12.160 - 12.218: 66.2043% ( 355) 00:10:33.083 12.218 - 12.276: 71.1061% ( 417) 00:10:33.083 12.276 - 12.335: 74.9853% ( 330) 00:10:33.083 12.335 - 12.393: 76.9014% ( 163) 00:10:33.083 12.393 - 12.451: 78.0769% ( 100) 00:10:33.083 12.451 - 12.509: 78.9232% ( 72) 00:10:33.083 12.509 - 12.567: 79.4640% ( 46) 00:10:33.083 12.567 - 12.625: 79.7578% ( 25) 00:10:33.083 12.625 - 12.684: 79.8872% ( 11) 00:10:33.083 12.684 - 12.742: 80.0165% ( 11) 00:10:33.083 12.742 - 12.800: 80.0870% ( 6) 00:10:33.083 12.800 - 12.858: 80.1458% ( 5) 00:10:33.083 12.858 - 12.916: 80.2280% ( 7) 00:10:33.083 12.916 - 12.975: 80.4279% ( 17) 00:10:33.083 12.975 - 13.033: 80.6277% ( 17) 00:10:33.083 13.033 - 13.091: 80.9451% ( 27) 00:10:33.083 13.091 - 13.149: 81.2390% ( 25) 00:10:33.083 13.149 - 13.207: 81.6739% ( 37) 00:10:33.083 13.207 - 13.265: 82.1089% ( 37) 00:10:33.083 13.265 - 13.324: 82.4145% ( 26) 00:10:33.083 13.324 - 13.382: 82.7554% ( 29) 00:10:33.083 13.382 - 13.440: 82.9670% ( 18) 00:10:33.083 13.440 - 13.498: 83.1903% ( 19) 00:10:33.083 13.498 - 13.556: 83.3549% ( 14) 00:10:33.083 13.556 - 13.615: 83.4489% ( 8) 00:10:33.083 13.615 - 13.673: 83.5195% ( 6) 00:10:33.083 13.673 - 13.731: 83.6958% ( 15) 00:10:33.083 13.731 - 13.789: 83.7310% ( 3) 00:10:33.083 13.789 - 13.847: 83.7663% ( 3) 00:10:33.083 13.847 - 
13.905: 83.8486% ( 7) 00:10:33.083 13.905 - 13.964: 83.8721% ( 2) 00:10:33.083 13.964 - 14.022: 83.8956% ( 2) 00:10:33.083 14.022 - 14.080: 83.9544% ( 5) 00:10:33.083 14.080 - 14.138: 84.0014% ( 4) 00:10:33.083 14.138 - 14.196: 84.0249% ( 2) 00:10:33.083 14.196 - 14.255: 84.0719% ( 4) 00:10:33.083 14.255 - 14.313: 84.0955% ( 2) 00:10:33.083 14.313 - 14.371: 84.1190% ( 2) 00:10:33.083 14.371 - 14.429: 84.1777% ( 5) 00:10:33.083 14.429 - 14.487: 84.2012% ( 2) 00:10:33.083 14.487 - 14.545: 84.2483% ( 4) 00:10:33.083 14.545 - 14.604: 84.2718% ( 2) 00:10:33.083 14.604 - 14.662: 84.3306% ( 5) 00:10:33.083 14.662 - 14.720: 84.3541% ( 2) 00:10:33.083 14.720 - 14.778: 84.3893% ( 3) 00:10:33.083 14.778 - 14.836: 84.4481% ( 5) 00:10:33.083 14.836 - 14.895: 84.4834% ( 3) 00:10:33.083 14.895 - 15.011: 84.6127% ( 11) 00:10:33.083 15.011 - 15.127: 84.7655% ( 13) 00:10:33.083 15.127 - 15.244: 84.8595% ( 8) 00:10:33.083 15.244 - 15.360: 85.0359% ( 15) 00:10:33.083 15.360 - 15.476: 85.1769% ( 12) 00:10:33.083 15.476 - 15.593: 85.4003% ( 19) 00:10:33.083 15.593 - 15.709: 85.5531% ( 13) 00:10:33.083 15.709 - 15.825: 85.6941% ( 12) 00:10:33.083 15.825 - 15.942: 85.8587% ( 14) 00:10:33.083 15.942 - 16.058: 86.0703% ( 18) 00:10:33.083 16.058 - 16.175: 86.2114% ( 12) 00:10:33.083 16.175 - 16.291: 86.4229% ( 18) 00:10:33.083 16.291 - 16.407: 86.5523% ( 11) 00:10:33.083 16.407 - 16.524: 86.7286% ( 15) 00:10:33.083 16.524 - 16.640: 86.9167% ( 16) 00:10:33.083 16.640 - 16.756: 87.0342% ( 10) 00:10:33.083 16.756 - 16.873: 87.1282% ( 8) 00:10:33.083 16.873 - 16.989: 87.3281% ( 17) 00:10:33.083 16.989 - 17.105: 87.4104% ( 7) 00:10:33.083 17.105 - 17.222: 87.5514% ( 12) 00:10:33.083 17.222 - 17.338: 87.7513% ( 17) 00:10:33.083 17.338 - 17.455: 87.8453% ( 8) 00:10:33.083 17.455 - 17.571: 87.9276% ( 7) 00:10:33.083 17.571 - 17.687: 87.9981% ( 6) 00:10:33.083 17.687 - 17.804: 88.0686% ( 6) 00:10:33.083 17.804 - 17.920: 88.1744% ( 9) 00:10:33.083 17.920 - 18.036: 88.2802% ( 9) 00:10:33.083 18.036 - 18.153: 88.3390% ( 5) 00:10:33.083 18.153 - 18.269: 88.4095% ( 6) 00:10:33.083 18.269 - 18.385: 88.4566% ( 4) 00:10:33.083 18.385 - 18.502: 88.5036% ( 4) 00:10:33.083 18.502 - 18.618: 88.5859% ( 7) 00:10:33.083 18.618 - 18.735: 88.6329% ( 4) 00:10:33.083 18.735 - 18.851: 88.7152% ( 7) 00:10:33.083 18.851 - 18.967: 88.7622% ( 4) 00:10:33.083 18.967 - 19.084: 88.8210% ( 5) 00:10:33.083 19.084 - 19.200: 88.8680% ( 4) 00:10:33.083 19.200 - 19.316: 88.9503% ( 7) 00:10:33.083 19.316 - 19.433: 88.9973% ( 4) 00:10:33.083 19.433 - 19.549: 89.0326% ( 3) 00:10:33.083 19.549 - 19.665: 89.0796% ( 4) 00:10:33.083 19.665 - 19.782: 89.1266% ( 4) 00:10:33.083 19.782 - 19.898: 89.1736% ( 4) 00:10:33.083 19.898 - 20.015: 89.2324% ( 5) 00:10:33.083 20.015 - 20.131: 89.2794% ( 4) 00:10:33.083 20.131 - 20.247: 89.3147% ( 3) 00:10:33.083 20.247 - 20.364: 89.3735% ( 5) 00:10:33.083 20.364 - 20.480: 89.5145% ( 12) 00:10:33.083 20.480 - 20.596: 89.5968% ( 7) 00:10:33.083 20.596 - 20.713: 89.6556% ( 5) 00:10:33.083 20.713 - 20.829: 89.7379% ( 7) 00:10:33.083 20.829 - 20.945: 89.8437% ( 9) 00:10:33.083 20.945 - 21.062: 89.8789% ( 3) 00:10:33.083 21.062 - 21.178: 90.0317% ( 13) 00:10:33.083 21.178 - 21.295: 90.2198% ( 16) 00:10:33.084 21.295 - 21.411: 90.3139% ( 8) 00:10:33.084 21.411 - 21.527: 90.3961% ( 7) 00:10:33.084 21.527 - 21.644: 90.5137% ( 10) 00:10:33.084 21.644 - 21.760: 90.5960% ( 7) 00:10:33.084 21.760 - 21.876: 90.6900% ( 8) 00:10:33.084 21.876 - 21.993: 90.7488% ( 5) 00:10:33.084 21.993 - 22.109: 90.8899% ( 12) 00:10:33.084 22.109 - 22.225: 
91.0309% ( 12) 00:10:33.084 22.225 - 22.342: 91.1132% ( 7) 00:10:33.084 22.342 - 22.458: 91.1602% ( 4) 00:10:33.084 22.458 - 22.575: 91.1955% ( 3) 00:10:33.084 22.575 - 22.691: 91.2543% ( 5) 00:10:33.084 22.691 - 22.807: 91.3836% ( 11) 00:10:33.084 22.807 - 22.924: 91.4306% ( 4) 00:10:33.084 22.924 - 23.040: 91.4659% ( 3) 00:10:33.084 23.040 - 23.156: 91.5129% ( 4) 00:10:33.084 23.156 - 23.273: 91.5481% ( 3) 00:10:33.084 23.273 - 23.389: 91.5716% ( 2) 00:10:33.084 23.389 - 23.505: 91.6657% ( 8) 00:10:33.084 23.505 - 23.622: 91.7245% ( 5) 00:10:33.084 23.622 - 23.738: 91.8538% ( 11) 00:10:33.084 23.738 - 23.855: 92.1241% ( 23) 00:10:33.084 23.855 - 23.971: 92.5708% ( 38) 00:10:33.084 23.971 - 24.087: 93.0880% ( 44) 00:10:33.084 24.087 - 24.204: 93.7581% ( 57) 00:10:33.084 24.204 - 24.320: 94.3693% ( 52) 00:10:33.084 24.320 - 24.436: 95.1099% ( 63) 00:10:33.084 24.436 - 24.553: 95.7682% ( 56) 00:10:33.084 24.553 - 24.669: 96.2149% ( 38) 00:10:33.084 24.669 - 24.785: 96.8261% ( 52) 00:10:33.084 24.785 - 24.902: 97.1905% ( 31) 00:10:33.084 24.902 - 25.018: 97.4609% ( 23) 00:10:33.084 25.018 - 25.135: 97.6607% ( 17) 00:10:33.084 25.135 - 25.251: 97.8959% ( 20) 00:10:33.084 25.251 - 25.367: 98.0016% ( 9) 00:10:33.084 25.367 - 25.484: 98.1310% ( 11) 00:10:33.084 25.484 - 25.600: 98.2603% ( 11) 00:10:33.084 25.600 - 25.716: 98.3308% ( 6) 00:10:33.084 25.716 - 25.833: 98.4013% ( 6) 00:10:33.084 25.833 - 25.949: 98.4248% ( 2) 00:10:33.084 25.949 - 26.065: 98.4601% ( 3) 00:10:33.084 26.065 - 26.182: 98.5189% ( 5) 00:10:33.084 26.298 - 26.415: 98.6012% ( 7) 00:10:33.084 26.415 - 26.531: 98.6717% ( 6) 00:10:33.084 26.531 - 26.647: 98.6952% ( 2) 00:10:33.084 26.647 - 26.764: 98.7305% ( 3) 00:10:33.084 26.880 - 26.996: 98.7422% ( 1) 00:10:33.084 26.996 - 27.113: 98.7540% ( 1) 00:10:33.084 27.113 - 27.229: 98.7657% ( 1) 00:10:33.084 27.578 - 27.695: 98.7775% ( 1) 00:10:33.084 27.695 - 27.811: 98.7892% ( 1) 00:10:33.084 28.509 - 28.625: 98.8010% ( 1) 00:10:33.084 28.742 - 28.858: 98.8480% ( 4) 00:10:33.084 28.858 - 28.975: 98.8598% ( 1) 00:10:33.084 28.975 - 29.091: 98.8715% ( 1) 00:10:33.084 29.324 - 29.440: 98.9185% ( 4) 00:10:33.084 29.440 - 29.556: 98.9303% ( 1) 00:10:33.084 29.556 - 29.673: 98.9656% ( 3) 00:10:33.084 29.673 - 29.789: 98.9891% ( 2) 00:10:33.084 29.789 - 30.022: 99.0596% ( 6) 00:10:33.084 30.022 - 30.255: 99.1654% ( 9) 00:10:33.084 30.255 - 30.487: 99.2712% ( 9) 00:10:33.084 30.487 - 30.720: 99.3652% ( 8) 00:10:33.084 30.720 - 30.953: 99.4005% ( 3) 00:10:33.084 30.953 - 31.185: 99.4475% ( 4) 00:10:33.084 31.185 - 31.418: 99.4593% ( 1) 00:10:33.084 31.418 - 31.651: 99.4710% ( 1) 00:10:33.084 31.651 - 31.884: 99.4945% ( 2) 00:10:33.084 31.884 - 32.116: 99.5768% ( 7) 00:10:33.084 32.116 - 32.349: 99.5886% ( 1) 00:10:33.084 32.349 - 32.582: 99.6003% ( 1) 00:10:33.084 32.582 - 32.815: 99.6709% ( 6) 00:10:33.084 32.815 - 33.047: 99.7061% ( 3) 00:10:33.084 33.280 - 33.513: 99.7179% ( 1) 00:10:33.084 33.745 - 33.978: 99.7414% ( 2) 00:10:33.084 33.978 - 34.211: 99.7531% ( 1) 00:10:33.084 34.211 - 34.444: 99.7649% ( 1) 00:10:33.084 34.444 - 34.676: 99.7767% ( 1) 00:10:33.084 34.676 - 34.909: 99.7884% ( 1) 00:10:33.084 34.909 - 35.142: 99.8002% ( 1) 00:10:33.084 35.142 - 35.375: 99.8119% ( 1) 00:10:33.084 35.607 - 35.840: 99.8237% ( 1) 00:10:33.084 35.840 - 36.073: 99.8354% ( 1) 00:10:33.084 36.305 - 36.538: 99.8472% ( 1) 00:10:33.084 38.633 - 38.865: 99.8589% ( 1) 00:10:33.084 39.331 - 39.564: 99.8824% ( 2) 00:10:33.084 39.564 - 39.796: 99.8942% ( 1) 00:10:33.084 40.262 - 40.495: 99.9060% ( 1) 
00:10:33.084 41.193 - 41.425: 99.9177% ( 1) 00:10:33.084 50.269 - 50.502: 99.9295% ( 1) 00:10:33.084 51.433 - 51.665: 99.9412% ( 1) 00:10:33.084 51.665 - 51.898: 99.9530% ( 1) 00:10:33.084 52.829 - 53.062: 99.9647% ( 1) 00:10:33.084 69.818 - 70.284: 99.9765% ( 1) 00:10:33.084 89.367 - 89.833: 99.9882% ( 1) 00:10:33.084 133.120 - 134.051: 100.0000% ( 1) 00:10:33.084 00:10:33.084 00:10:33.084 real 0m1.287s 00:10:33.084 user 0m1.095s 00:10:33.084 sys 0m0.144s 00:10:33.084 21:09:44 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.084 21:09:44 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:33.084 ************************************ 00:10:33.084 END TEST nvme_overhead 00:10:33.084 ************************************ 00:10:33.084 21:09:44 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:33.084 21:09:44 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:33.084 21:09:44 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:33.084 21:09:44 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.084 21:09:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:33.084 ************************************ 00:10:33.084 START TEST nvme_arbitration 00:10:33.084 ************************************ 00:10:33.084 21:09:44 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:36.369 Initializing NVMe Controllers 00:10:36.369 Attached to 0000:00:10.0 00:10:36.369 Attached to 0000:00:11.0 00:10:36.369 Attached to 0000:00:13.0 00:10:36.369 Attached to 0000:00:12.0 00:10:36.369 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:36.369 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:36.369 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:36.369 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:36.369 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:36.369 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:36.369 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:36.369 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:36.369 Initialization complete. Launching workers. 
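Note: the arbitration example echoes its effective configuration (the 'run with configuration' line above), so the run is directly reproducible. A sketch, annotating only the flags whose meaning this log makes plain; the rest are carried over verbatim without interpretation:

  # -q 64 : queue depth    -t 3 : three-second run    -c 0xf : workers on cores 0-3
  # -w randrw -M 50 : mixed workload, presumably 50% reads (inferred)
  sudo "$SPDK/build/examples/arbitration" \
      -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0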
00:10:36.369 Starting thread on core 1 with urgent priority queue 00:10:36.369 Starting thread on core 2 with urgent priority queue 00:10:36.369 Starting thread on core 3 with urgent priority queue 00:10:36.369 Starting thread on core 0 with urgent priority queue 00:10:36.369 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:10:36.369 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:10:36.369 QEMU NVMe Ctrl (12341 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:10:36.369 QEMU NVMe Ctrl (12342 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:10:36.369 QEMU NVMe Ctrl (12343 ) core 2: 725.33 IO/s 137.87 secs/100000 ios 00:10:36.369 QEMU NVMe Ctrl (12342 ) core 3: 682.67 IO/s 146.48 secs/100000 ios 00:10:36.369 ======================================================== 00:10:36.369 00:10:36.369 00:10:36.369 real 0m3.403s 00:10:36.369 user 0m9.329s 00:10:36.369 sys 0m0.159s 00:10:36.369 21:09:47 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.369 21:09:47 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:36.369 ************************************ 00:10:36.369 END TEST nvme_arbitration 00:10:36.369 ************************************ 00:10:36.369 21:09:47 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:36.369 21:09:47 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:36.369 21:09:47 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:36.369 21:09:47 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.369 21:09:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:36.369 ************************************ 00:10:36.369 START TEST nvme_single_aen 00:10:36.369 ************************************ 00:10:36.369 21:09:47 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:36.937 Asynchronous Event Request test 00:10:36.937 Attached to 0000:00:10.0 00:10:36.937 Attached to 0000:00:11.0 00:10:36.937 Attached to 0000:00:13.0 00:10:36.937 Attached to 0000:00:12.0 00:10:36.937 Reset controller to setup AER completions for this process 00:10:36.937 Registering asynchronous event callbacks... 
00:10:36.937 Getting orig temperature thresholds of all controllers 00:10:36.937 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:36.937 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:36.937 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:36.937 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:36.937 Setting all controllers temperature threshold low to trigger AER 00:10:36.937 Waiting for all controllers temperature threshold to be set lower 00:10:36.937 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:36.937 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:36.937 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:36.937 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:36.937 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:36.937 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:36.937 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:36.937 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:36.937 Waiting for all controllers to trigger AER and reset threshold 00:10:36.937 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:36.937 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:36.937 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:36.937 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:36.937 Cleaning up... 00:10:36.937 00:10:36.937 real 0m0.289s 00:10:36.937 user 0m0.103s 00:10:36.937 sys 0m0.139s 00:10:36.937 21:09:48 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.937 21:09:48 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:36.937 ************************************ 00:10:36.937 END TEST nvme_single_aen 00:10:36.937 ************************************ 00:10:36.937 21:09:48 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:36.938 21:09:48 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:36.938 21:09:48 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:36.938 21:09:48 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.938 21:09:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:36.938 ************************************ 00:10:36.938 START TEST nvme_doorbell_aers 00:10:36.938 ************************************ 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:36.938 21:09:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:37.197 [2024-07-14 21:09:48.540316] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:10:47.169 Executing: test_write_invalid_db 00:10:47.169 Waiting for AER completion... 00:10:47.169 Failure: test_write_invalid_db 00:10:47.169 00:10:47.169 Executing: test_invalid_db_write_overflow_sq 00:10:47.169 Waiting for AER completion... 00:10:47.169 Failure: test_invalid_db_write_overflow_sq 00:10:47.169 00:10:47.169 Executing: test_invalid_db_write_overflow_cq 00:10:47.169 Waiting for AER completion... 00:10:47.169 Failure: test_invalid_db_write_overflow_cq 00:10:47.169 00:10:47.169 21:09:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:47.169 21:09:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:47.169 [2024-07-14 21:09:58.644209] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:10:57.173 Executing: test_write_invalid_db 00:10:57.173 Waiting for AER completion... 00:10:57.173 Failure: test_write_invalid_db 00:10:57.173 00:10:57.173 Executing: test_invalid_db_write_overflow_sq 00:10:57.173 Waiting for AER completion... 00:10:57.173 Failure: test_invalid_db_write_overflow_sq 00:10:57.173 00:10:57.173 Executing: test_invalid_db_write_overflow_cq 00:10:57.173 Waiting for AER completion... 00:10:57.173 Failure: test_invalid_db_write_overflow_cq 00:10:57.173 00:10:57.173 21:10:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:57.173 21:10:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:57.173 [2024-07-14 21:10:08.676846] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:07.147 Executing: test_write_invalid_db 00:11:07.147 Waiting for AER completion... 00:11:07.147 Failure: test_write_invalid_db 00:11:07.147 00:11:07.147 Executing: test_invalid_db_write_overflow_sq 00:11:07.147 Waiting for AER completion... 00:11:07.147 Failure: test_invalid_db_write_overflow_sq 00:11:07.147 00:11:07.147 Executing: test_invalid_db_write_overflow_cq 00:11:07.147 Waiting for AER completion... 
00:11:07.147 Failure: test_invalid_db_write_overflow_cq 00:11:07.147 00:11:07.147 21:10:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:07.147 21:10:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:07.405 [2024-07-14 21:10:18.732242] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 Executing: test_write_invalid_db 00:11:17.381 Waiting for AER completion... 00:11:17.381 Failure: test_write_invalid_db 00:11:17.381 00:11:17.381 Executing: test_invalid_db_write_overflow_sq 00:11:17.381 Waiting for AER completion... 00:11:17.381 Failure: test_invalid_db_write_overflow_sq 00:11:17.381 00:11:17.381 Executing: test_invalid_db_write_overflow_cq 00:11:17.381 Waiting for AER completion... 00:11:17.381 Failure: test_invalid_db_write_overflow_cq 00:11:17.381 00:11:17.381 00:11:17.381 real 0m40.248s 00:11:17.381 user 0m34.204s 00:11:17.381 sys 0m5.680s 00:11:17.381 21:10:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.381 21:10:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:17.381 ************************************ 00:11:17.381 END TEST nvme_doorbell_aers 00:11:17.381 ************************************ 00:11:17.381 21:10:28 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:17.381 21:10:28 nvme -- nvme/nvme.sh@97 -- # uname 00:11:17.381 21:10:28 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:17.381 21:10:28 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:17.381 21:10:28 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:17.381 21:10:28 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.381 21:10:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:17.381 ************************************ 00:11:17.381 START TEST nvme_multi_aen 00:11:17.381 ************************************ 00:11:17.381 21:10:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:17.381 [2024-07-14 21:10:28.755616] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.755755] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.755789] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.757586] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.757651] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.757670] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 
00:11:17.381 [2024-07-14 21:10:28.759148] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.759193] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.759222] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.760632] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.760677] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 [2024-07-14 21:10:28.760695] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69495) is not found. Dropping the request. 00:11:17.381 Child process pid: 70011 00:11:17.640 [Child] Asynchronous Event Request test 00:11:17.640 [Child] Attached to 0000:00:10.0 00:11:17.640 [Child] Attached to 0000:00:11.0 00:11:17.640 [Child] Attached to 0000:00:13.0 00:11:17.640 [Child] Attached to 0000:00:12.0 00:11:17.640 [Child] Registering asynchronous event callbacks... 00:11:17.640 [Child] Getting orig temperature thresholds of all controllers 00:11:17.640 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:17.640 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:17.640 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:17.640 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:17.640 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:17.640 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:17.640 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:17.640 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:17.640 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:17.640 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.640 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.640 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.640 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.640 [Child] Cleaning up... 00:11:17.640 Asynchronous Event Request test 00:11:17.640 Attached to 0000:00:10.0 00:11:17.640 Attached to 0000:00:11.0 00:11:17.640 Attached to 0000:00:13.0 00:11:17.640 Attached to 0000:00:12.0 00:11:17.640 Reset controller to setup AER completions for this process 00:11:17.640 Registering asynchronous event callbacks... 
00:11:17.640 Getting orig temperature thresholds of all controllers 00:11:17.640 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:17.640 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:17.640 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:17.640 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:17.640 Setting all controllers temperature threshold low to trigger AER 00:11:17.640 Waiting for all controllers temperature threshold to be set lower 00:11:17.640 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:17.640 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:17.640 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:17.640 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:17.640 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:17.640 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:17.640 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:17.640 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:17.640 Waiting for all controllers to trigger AER and reset threshold 00:11:17.640 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.640 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.640 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.640 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.640 Cleaning up... 00:11:17.640 00:11:17.640 real 0m0.540s 00:11:17.640 user 0m0.201s 00:11:17.640 sys 0m0.242s 00:11:17.640 21:10:29 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.640 21:10:29 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:17.640 ************************************ 00:11:17.640 END TEST nvme_multi_aen 00:11:17.640 ************************************ 00:11:17.640 21:10:29 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:17.640 21:10:29 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:17.640 21:10:29 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:17.640 21:10:29 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.640 21:10:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:17.640 ************************************ 00:11:17.640 START TEST nvme_startup 00:11:17.640 ************************************ 00:11:17.640 21:10:29 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:17.899 Initializing NVMe Controllers 00:11:17.899 Attached to 0000:00:10.0 00:11:17.899 Attached to 0000:00:11.0 00:11:17.899 Attached to 0000:00:13.0 00:11:17.899 Attached to 0000:00:12.0 00:11:17.899 Initialization complete. 00:11:17.899 Time used:170943.328 (us). 
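Note: nvme_startup times controller attach against a budget. Since 'Time used' is reported in microseconds, the '-t 1000000' argument is presumably a one-second limit in the same unit (an inference; the log does not spell it out). A sketch:

  sudo "$SPDK/test/nvme/startup/startup" -t 1000000   # budget in us (inferred); this run used ~171 ms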
00:11:17.899 00:11:17.899 real 0m0.255s 00:11:17.899 user 0m0.103s 00:11:17.899 sys 0m0.104s 00:11:17.899 21:10:29 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.899 21:10:29 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:17.899 ************************************ 00:11:17.899 END TEST nvme_startup 00:11:17.899 ************************************ 00:11:17.899 21:10:29 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:17.899 21:10:29 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:17.899 21:10:29 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:17.899 21:10:29 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.899 21:10:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:17.899 ************************************ 00:11:17.899 START TEST nvme_multi_secondary 00:11:17.899 ************************************ 00:11:17.899 21:10:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:11:17.899 21:10:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70067 00:11:17.899 21:10:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:17.899 21:10:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70068 00:11:17.900 21:10:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:17.900 21:10:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:22.083 Initializing NVMe Controllers 00:11:22.083 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:22.083 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:22.083 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:22.083 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:22.083 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:22.083 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:22.083 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:22.083 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:22.083 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:22.083 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:22.083 Initialization complete. Launching workers. 
00:11:22.083 ======================================================== 00:11:22.083 Latency(us) 00:11:22.083 Device Information : IOPS MiB/s Average min max 00:11:22.083 PCIE (0000:00:10.0) NSID 1 from core 2: 2613.07 10.21 6121.10 1942.95 14064.65 00:11:22.083 PCIE (0000:00:11.0) NSID 1 from core 2: 2613.07 10.21 6123.01 1905.77 13529.16 00:11:22.083 PCIE (0000:00:13.0) NSID 1 from core 2: 2613.07 10.21 6123.01 1912.69 14180.47 00:11:22.083 PCIE (0000:00:12.0) NSID 1 from core 2: 2613.07 10.21 6123.07 1936.28 14546.47 00:11:22.083 PCIE (0000:00:12.0) NSID 2 from core 2: 2613.07 10.21 6119.46 1729.64 14348.46 00:11:22.083 PCIE (0000:00:12.0) NSID 3 from core 2: 2613.07 10.21 6115.38 1858.70 14235.70 00:11:22.083 ======================================================== 00:11:22.083 Total : 15678.40 61.24 6120.84 1729.64 14546.47 00:11:22.083 00:11:22.083 Initializing NVMe Controllers 00:11:22.083 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:22.083 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:22.083 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:22.083 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:22.083 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:22.083 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:22.083 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:22.083 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:22.083 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:22.083 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:22.083 Initialization complete. Launching workers. 00:11:22.083 ======================================================== 00:11:22.083 Latency(us) 00:11:22.083 Device Information : IOPS MiB/s Average min max 00:11:22.083 PCIE (0000:00:10.0) NSID 1 from core 1: 5610.31 21.92 2850.21 1506.64 5207.96 00:11:22.083 PCIE (0000:00:11.0) NSID 1 from core 1: 5610.31 21.92 2851.27 1512.78 5012.18 00:11:22.083 PCIE (0000:00:13.0) NSID 1 from core 1: 5610.31 21.92 2851.19 1503.05 4964.33 00:11:22.083 PCIE (0000:00:12.0) NSID 1 from core 1: 5610.31 21.92 2851.12 1493.61 5161.80 00:11:22.083 PCIE (0000:00:12.0) NSID 2 from core 1: 5610.31 21.92 2851.05 1502.88 5566.56 00:11:22.083 PCIE (0000:00:12.0) NSID 3 from core 1: 5610.31 21.92 2850.96 1126.28 5226.45 00:11:22.083 ======================================================== 00:11:22.083 Total : 33661.86 131.49 2850.97 1126.28 5566.56 00:11:22.083 00:11:22.083 21:10:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70067 00:11:23.453 Initializing NVMe Controllers 00:11:23.453 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:23.453 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:23.453 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:23.453 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:23.453 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:23.453 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:23.453 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:23.453 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:23.453 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:23.453 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:23.453 Initialization complete. Launching workers. 
00:11:23.453 ======================================================== 00:11:23.453 Latency(us) 00:11:23.453 Device Information : IOPS MiB/s Average min max 00:11:23.453 PCIE (0000:00:10.0) NSID 1 from core 0: 8693.61 33.96 1839.00 928.26 4736.61 00:11:23.453 PCIE (0000:00:11.0) NSID 1 from core 0: 8693.61 33.96 1839.98 966.49 5225.78 00:11:23.453 PCIE (0000:00:13.0) NSID 1 from core 0: 8693.61 33.96 1839.95 910.61 5339.99 00:11:23.453 PCIE (0000:00:12.0) NSID 1 from core 0: 8693.61 33.96 1839.91 847.16 5244.78 00:11:23.453 PCIE (0000:00:12.0) NSID 2 from core 0: 8693.61 33.96 1839.88 789.08 5089.62 00:11:23.453 PCIE (0000:00:12.0) NSID 3 from core 0: 8696.81 33.97 1839.17 710.27 4619.72 00:11:23.453 ======================================================== 00:11:23.453 Total : 52164.85 203.77 1839.65 710.27 5339.99 00:11:23.453 00:11:23.453 21:10:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70068 00:11:23.453 21:10:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70132 00:11:23.453 21:10:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:23.453 21:10:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70133 00:11:23.453 21:10:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:23.453 21:10:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:26.730 Initializing NVMe Controllers 00:11:26.730 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:26.730 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:26.730 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:26.730 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:26.730 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:26.730 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:26.730 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:26.730 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:26.730 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:26.731 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:26.731 Initialization complete. Launching workers. 
00:11:26.731 ======================================================== 00:11:26.731 Latency(us) 00:11:26.731 Device Information : IOPS MiB/s Average min max 00:11:26.731 PCIE (0000:00:10.0) NSID 1 from core 0: 6030.96 23.56 2651.40 1014.78 5927.18 00:11:26.731 PCIE (0000:00:11.0) NSID 1 from core 0: 6030.96 23.56 2652.51 1020.55 6079.69 00:11:26.731 PCIE (0000:00:13.0) NSID 1 from core 0: 6030.96 23.56 2652.47 1033.19 5371.49 00:11:26.731 PCIE (0000:00:12.0) NSID 1 from core 0: 6036.29 23.58 2650.20 1022.17 5057.32 00:11:26.731 PCIE (0000:00:12.0) NSID 2 from core 0: 6036.29 23.58 2650.34 1029.72 5382.10 00:11:26.731 PCIE (0000:00:12.0) NSID 3 from core 0: 6036.29 23.58 2650.30 1016.27 5422.67 00:11:26.731 ======================================================== 00:11:26.731 Total : 36201.76 141.41 2651.20 1014.78 6079.69 00:11:26.731 00:11:26.731 Initializing NVMe Controllers 00:11:26.731 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:26.731 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:26.731 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:26.731 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:26.731 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:26.731 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:26.731 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:26.731 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:26.731 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:26.731 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:26.731 Initialization complete. Launching workers. 00:11:26.731 ======================================================== 00:11:26.731 Latency(us) 00:11:26.731 Device Information : IOPS MiB/s Average min max 00:11:26.731 PCIE (0000:00:10.0) NSID 1 from core 1: 5572.52 21.77 2869.48 1009.72 5721.72 00:11:26.731 PCIE (0000:00:11.0) NSID 1 from core 1: 5572.52 21.77 2870.57 1036.98 5602.72 00:11:26.731 PCIE (0000:00:13.0) NSID 1 from core 1: 5572.52 21.77 2870.45 946.15 5427.12 00:11:26.731 PCIE (0000:00:12.0) NSID 1 from core 1: 5572.52 21.77 2870.32 888.07 5843.62 00:11:26.731 PCIE (0000:00:12.0) NSID 2 from core 1: 5572.52 21.77 2870.20 884.46 5942.70 00:11:26.731 PCIE (0000:00:12.0) NSID 3 from core 1: 5572.52 21.77 2870.09 843.12 5906.29 00:11:26.731 ======================================================== 00:11:26.731 Total : 33435.11 130.61 2870.19 843.12 5942.70 00:11:26.731 00:11:29.263 Initializing NVMe Controllers 00:11:29.263 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:29.263 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:29.263 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:29.263 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:29.263 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:29.263 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:29.263 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:29.263 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:29.263 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:29.263 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:29.263 Initialization complete. Launching workers. 
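The "with lcore N" associations in these tables follow directly from the one-bit -c masks above: 0x1, 0x2, and 0x4 select lcores 0, 1, and 2. An illustrative check of that mapping:

    # Map a one-bit core mask to the lcore index the tables report.
    for mask in 0x1 0x2 0x4; do
      i=0
      while (( ((mask >> i) & 1) == 0 )); do i=$((i + 1)); done
      echo "core mask $mask -> lcore $i"
    done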
00:11:29.263 ======================================================== 00:11:29.263 Latency(us) 00:11:29.263 Device Information : IOPS MiB/s Average min max 00:11:29.263 PCIE (0000:00:10.0) NSID 1 from core 2: 4079.28 15.93 3920.74 988.88 29028.75 00:11:29.263 PCIE (0000:00:11.0) NSID 1 from core 2: 4079.28 15.93 3921.29 974.81 29200.12 00:11:29.263 PCIE (0000:00:13.0) NSID 1 from core 2: 4079.28 15.93 3921.12 1007.53 24669.61 00:11:29.263 PCIE (0000:00:12.0) NSID 1 from core 2: 4079.28 15.93 3921.24 980.36 24485.80 00:11:29.263 PCIE (0000:00:12.0) NSID 2 from core 2: 4079.28 15.93 3921.14 878.82 28961.64 00:11:29.263 PCIE (0000:00:12.0) NSID 3 from core 2: 4079.28 15.93 3921.09 771.82 28975.61 00:11:29.263 ======================================================== 00:11:29.263 Total : 24475.68 95.61 3921.10 771.82 29200.12 00:11:29.263 00:11:29.263 ************************************ 00:11:29.263 END TEST nvme_multi_secondary 00:11:29.263 ************************************ 00:11:29.263 21:10:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70132 00:11:29.263 21:10:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70133 00:11:29.263 00:11:29.263 real 0m10.849s 00:11:29.263 user 0m18.591s 00:11:29.263 sys 0m0.809s 00:11:29.263 21:10:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.263 21:10:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:29.263 21:10:40 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:29.263 21:10:40 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/69079 ]] 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@1088 -- # kill 69079 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@1089 -- # wait 69079 00:11:29.263 [2024-07-14 21:10:40.317742] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.317880] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.317915] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.317944] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.321012] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.321085] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.321114] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.321166] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 
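The "owning process (pid 70010) is not found. Dropping the request" messages on either side of this point are the PCIe driver discarding admin commands that were queued by a test process which has already exited; they appear while kill_stub tears down the long-lived stub. Roughly, from the autotest_common.sh lines traced here:

    # kill_stub pattern as traced: stop the stub, reap it, clean its socket file.
    stubpid=69079                      # pid recorded when the stub was started
    [[ -e /proc/$stubpid ]] && kill "$stubpid"
    wait "$stubpid"                    # pending requests owned by dead pids get dropped here
    rm -f /var/run/spdk_stub0          # stub socket file, removed for the next test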
00:11:29.263 [2024-07-14 21:10:40.323853] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.323901] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.323922] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.323944] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.326198] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.326286] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.326309] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 [2024-07-14 21:10:40.326330] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70010) is not found. Dropping the request. 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:11:29.263 21:10:40 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.263 21:10:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:29.263 ************************************ 00:11:29.263 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:29.263 ************************************ 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:29.263 * Looking for test storage... 
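bdev_nvme_reset_stuck_adm_cmd first resolves the PCI address it will attach to; the get_first_nvme_bdf trace that follows boils down to this (rootdir as in this run):

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits a JSON config with one attach entry per local controller;
    # .params.traddr is the PCI address (bdf) of each.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'No bdfs found' >&2; exit 1; }
    bdf=${bdfs[0]}                     # here: 0000:00:10.0, first of the four controllers
    echo "$bdf"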
00:11:29.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:11:29.263 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=70293 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 70293 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 70293 ']' 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.264 21:10:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:29.523 [2024-07-14 21:10:40.883312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:29.523 [2024-07-14 21:10:40.883468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70293 ] 00:11:29.523 [2024-07-14 21:10:41.061269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.782 [2024-07-14 21:10:41.295773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.782 [2024-07-14 21:10:41.295942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.782 [2024-07-14 21:10:41.296435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.782 [2024-07-14 21:10:41.296444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:30.718 nvme0n1 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_DnX9d.txt 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:30.718 true 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720991442 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=70316 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:30.718 21:10:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:32.619 [2024-07-14 21:10:44.111465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:32.619 [2024-07-14 21:10:44.111874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:32.619 [2024-07-14 21:10:44.111914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:32.619 [2024-07-14 21:10:44.111938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.619 [2024-07-14 21:10:44.114030] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 70316 00:11:32.619 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 70316 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 70316 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:32.619 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_DnX9d.txt 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_DnX9d.txt 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 70293 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 70293 ']' 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 70293 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70293 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:32.878 killing process with pid 70293 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70293' 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 70293 00:11:32.878 21:10:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 70293 00:11:35.431 21:10:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:35.431 21:10:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:35.431 00:11:35.431 real 0m5.852s 00:11:35.431 user 0m20.239s 00:11:35.431 sys 0m0.590s 00:11:35.431 21:10:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.431 ************************************ 00:11:35.431 21:10:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:35.431 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:35.431 ************************************ 00:11:35.431 21:10:46 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:35.431 21:10:46 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:35.431 21:10:46 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:35.431 21:10:46 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:35.431 21:10:46 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.431 21:10:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:35.431 ************************************ 00:11:35.431 START TEST nvme_fio 00:11:35.431 ************************************ 00:11:35.431 21:10:46 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:35.431 21:10:46 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:35.431 21:10:46 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:11:35.431 21:10:46 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:35.431 21:10:46 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:35.431 21:10:46 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:35.431 21:10:46 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:35.431 21:10:46 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:35.431 21:10:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:35.691 21:10:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:35.691 21:10:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:35.691 21:10:47 
nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:35.691 21:10:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:35.951 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:35.951 fio-3.35 00:11:35.951 Starting 1 thread 00:11:39.238 00:11:39.238 test: (groupid=0, jobs=1): err= 0: pid=70461: Sun Jul 14 21:10:50 2024 00:11:39.238 read: IOPS=16.9k, BW=66.1MiB/s (69.3MB/s)(132MiB/2001msec) 00:11:39.238 slat (nsec): min=4647, max=89445, avg=5794.45, stdev=1632.05 00:11:39.238 clat (usec): min=274, max=8626, avg=3756.54, stdev=384.03 00:11:39.238 lat (usec): min=283, max=8667, avg=3762.33, stdev=384.59 00:11:39.238 clat percentiles (usec): 00:11:39.238 | 1.00th=[ 3228], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3523], 00:11:39.238 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:11:39.238 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4293], 95.00th=[ 4424], 00:11:39.238 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 7832], 99.95th=[ 7898], 00:11:39.238 | 99.99th=[ 8455] 00:11:39.238 bw ( KiB/s): min=64520, max=69864, per=98.96%, avg=66992.00, stdev=2694.36, samples=3 00:11:39.238 iops : min=16130, max=17466, avg=16748.00, stdev=673.59, samples=3 00:11:39.238 write: IOPS=17.0k, BW=66.3MiB/s (69.5MB/s)(133MiB/2001msec); 0 zone resets 00:11:39.238 slat (nsec): min=4736, max=87272, avg=5975.59, stdev=1731.44 00:11:39.238 clat (usec): min=323, max=8463, avg=3765.78, stdev=383.75 00:11:39.238 lat (usec): min=329, max=8490, avg=3771.75, stdev=384.36 00:11:39.238 clat percentiles (usec): 00:11:39.238 | 1.00th=[ 3228], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3523], 00:11:39.238 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3720], 00:11:39.238 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4293], 95.00th=[ 4424], 00:11:39.238 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 7767], 99.95th=[ 7963], 00:11:39.238 | 99.99th=[ 8225] 00:11:39.238 bw ( KiB/s): min=64200, 
max=69864, per=98.64%, avg=66960.00, stdev=2834.74, samples=3 00:11:39.238 iops : min=16050, max=17466, avg=16740.00, stdev=708.69, samples=3 00:11:39.238 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:11:39.238 lat (msec) : 2=0.05%, 4=81.55%, 10=18.36% 00:11:39.238 cpu : usr=98.90%, sys=0.20%, ctx=3, majf=0, minf=605 00:11:39.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:39.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.238 issued rwts: total=33866,33960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.238 00:11:39.238 Run status group 0 (all jobs): 00:11:39.238 READ: bw=66.1MiB/s (69.3MB/s), 66.1MiB/s-66.1MiB/s (69.3MB/s-69.3MB/s), io=132MiB (139MB), run=2001-2001msec 00:11:39.238 WRITE: bw=66.3MiB/s (69.5MB/s), 66.3MiB/s-66.3MiB/s (69.5MB/s-69.5MB/s), io=133MiB (139MB), run=2001-2001msec 00:11:39.238 ----------------------------------------------------- 00:11:39.238 Suppressions used: 00:11:39.238 count bytes template 00:11:39.238 1 32 /usr/src/fio/parse.c 00:11:39.238 1 8 libtcmalloc_minimal.so 00:11:39.238 ----------------------------------------------------- 00:11:39.238 00:11:39.238 21:10:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:39.238 21:10:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:39.238 21:10:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:39.238 21:10:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:39.498 21:10:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:39.498 21:10:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:40.066 21:10:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:40.066 21:10:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:40.066 21:10:51 nvme.nvme_fio -- 
common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:40.066 21:10:51 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:40.066 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:40.066 fio-3.35 00:11:40.066 Starting 1 thread 00:11:43.348 00:11:43.348 test: (groupid=0, jobs=1): err= 0: pid=70526: Sun Jul 14 21:10:54 2024 00:11:43.348 read: IOPS=17.1k, BW=66.8MiB/s (70.1MB/s)(134MiB/2001msec) 00:11:43.348 slat (nsec): min=4328, max=52027, avg=5754.39, stdev=1755.75 00:11:43.348 clat (usec): min=286, max=9098, avg=3716.30, stdev=369.61 00:11:43.348 lat (usec): min=292, max=9150, avg=3722.06, stdev=370.09 00:11:43.348 clat percentiles (usec): 00:11:43.348 | 1.00th=[ 3097], 5.00th=[ 3294], 10.00th=[ 3392], 20.00th=[ 3490], 00:11:43.348 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:11:43.348 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4113], 95.00th=[ 4490], 00:11:43.348 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5669], 99.95th=[ 7439], 00:11:43.348 | 99.99th=[ 8979] 00:11:43.349 bw ( KiB/s): min=63137, max=69976, per=98.10%, avg=67144.33, stdev=3567.86, samples=3 00:11:43.349 iops : min=15784, max=17494, avg=16786.00, stdev=892.11, samples=3 00:11:43.349 write: IOPS=17.1k, BW=67.0MiB/s (70.2MB/s)(134MiB/2001msec); 0 zone resets 00:11:43.349 slat (nsec): min=4464, max=50793, avg=5902.19, stdev=1795.73 00:11:43.349 clat (usec): min=255, max=8988, avg=3728.79, stdev=372.99 00:11:43.349 lat (usec): min=260, max=9009, avg=3734.69, stdev=373.48 00:11:43.349 clat percentiles (usec): 00:11:43.349 | 1.00th=[ 3130], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:11:43.349 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3720], 00:11:43.349 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4146], 95.00th=[ 4490], 00:11:43.349 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 6259], 99.95th=[ 7635], 00:11:43.349 | 99.99th=[ 8717] 00:11:43.349 bw ( KiB/s): min=63441, max=69648, per=97.80%, avg=67051.00, stdev=3225.11, samples=3 00:11:43.349 iops : min=15860, max=17412, avg=16762.67, stdev=806.42, samples=3 00:11:43.349 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:11:43.349 lat (msec) : 2=0.16%, 4=86.44%, 10=13.36% 00:11:43.349 cpu : usr=98.95%, sys=0.15%, ctx=4, majf=0, minf=606 00:11:43.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:43.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:43.349 issued rwts: total=34238,34298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:43.349 00:11:43.349 Run status group 0 (all jobs): 00:11:43.349 READ: bw=66.8MiB/s (70.1MB/s), 66.8MiB/s-66.8MiB/s (70.1MB/s-70.1MB/s), io=134MiB (140MB), run=2001-2001msec 00:11:43.349 WRITE: bw=67.0MiB/s (70.2MB/s), 67.0MiB/s-67.0MiB/s (70.2MB/s-70.2MB/s), io=134MiB (140MB), run=2001-2001msec 00:11:43.607 
----------------------------------------------------- 00:11:43.607 Suppressions used: 00:11:43.607 count bytes template 00:11:43.607 1 32 /usr/src/fio/parse.c 00:11:43.607 1 8 libtcmalloc_minimal.so 00:11:43.607 ----------------------------------------------------- 00:11:43.607 00:11:43.607 21:10:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:43.607 21:10:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:43.607 21:10:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:43.607 21:10:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:43.864 21:10:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:43.864 21:10:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:44.123 21:10:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:44.123 21:10:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:44.123 21:10:55 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:44.381 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:44.381 fio-3.35 00:11:44.381 Starting 1 thread 00:11:47.666 00:11:47.666 test: (groupid=0, jobs=1): err= 0: pid=70588: Sun Jul 14 21:10:58 2024 00:11:47.666 read: IOPS=16.7k, BW=65.4MiB/s (68.5MB/s)(131MiB/2001msec) 00:11:47.666 slat (nsec): min=4336, max=50677, avg=5804.72, stdev=1910.52 
00:11:47.666 clat (usec): min=320, max=8098, avg=3802.44, stdev=363.13 00:11:47.666 lat (usec): min=325, max=8148, avg=3808.24, stdev=363.56 00:11:47.666 clat percentiles (usec): 00:11:47.666 | 1.00th=[ 3326], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:11:47.666 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3785], 00:11:47.666 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4228], 95.00th=[ 4424], 00:11:47.666 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 7242], 99.95th=[ 7373], 00:11:47.666 | 99.99th=[ 7963] 00:11:47.666 bw ( KiB/s): min=64704, max=68584, per=98.73%, avg=66077.33, stdev=2174.15, samples=3 00:11:47.666 iops : min=16176, max=17146, avg=16519.33, stdev=543.54, samples=3 00:11:47.666 write: IOPS=16.8k, BW=65.5MiB/s (68.7MB/s)(131MiB/2001msec); 0 zone resets 00:11:47.666 slat (nsec): min=4491, max=48488, avg=5939.55, stdev=1887.03 00:11:47.666 clat (usec): min=275, max=7990, avg=3811.15, stdev=367.02 00:11:47.666 lat (usec): min=280, max=8008, avg=3817.09, stdev=367.41 00:11:47.666 clat percentiles (usec): 00:11:47.666 | 1.00th=[ 3326], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3589], 00:11:47.666 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:11:47.666 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4228], 95.00th=[ 4424], 00:11:47.666 | 99.00th=[ 5080], 99.50th=[ 5473], 99.90th=[ 7242], 99.95th=[ 7439], 00:11:47.666 | 99.99th=[ 7635] 00:11:47.666 bw ( KiB/s): min=64216, max=68536, per=98.46%, avg=66021.33, stdev=2245.65, samples=3 00:11:47.666 iops : min=16054, max=17134, avg=16505.33, stdev=561.41, samples=3 00:11:47.666 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:47.666 lat (msec) : 2=0.06%, 4=81.13%, 10=18.78% 00:11:47.666 cpu : usr=99.05%, sys=0.10%, ctx=4, majf=0, minf=606 00:11:47.666 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:47.666 issued rwts: total=33481,33544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.666 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:47.666 00:11:47.666 Run status group 0 (all jobs): 00:11:47.666 READ: bw=65.4MiB/s (68.5MB/s), 65.4MiB/s-65.4MiB/s (68.5MB/s-68.5MB/s), io=131MiB (137MB), run=2001-2001msec 00:11:47.666 WRITE: bw=65.5MiB/s (68.7MB/s), 65.5MiB/s-65.5MiB/s (68.7MB/s-68.7MB/s), io=131MiB (137MB), run=2001-2001msec 00:11:47.666 ----------------------------------------------------- 00:11:47.666 Suppressions used: 00:11:47.666 count bytes template 00:11:47.666 1 32 /usr/src/fio/parse.c 00:11:47.666 1 8 libtcmalloc_minimal.so 00:11:47.666 ----------------------------------------------------- 00:11:47.666 00:11:47.666 21:10:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:47.666 21:10:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:47.666 21:10:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:47.666 21:10:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:48.233 21:10:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:48.233 21:10:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:48.233 21:10:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:48.233 21:10:59 nvme.nvme_fio -- nvme/nvme.sh@43 
-- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:48.233 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:48.491 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:48.491 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:48.491 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:48.491 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:48.491 21:10:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:48.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:48.491 fio-3.35 00:11:48.491 Starting 1 thread 00:11:53.880 00:11:53.880 test: (groupid=0, jobs=1): err= 0: pid=70652: Sun Jul 14 21:11:04 2024 00:11:53.880 read: IOPS=16.8k, BW=65.8MiB/s (69.0MB/s)(132MiB/2001msec) 00:11:53.880 slat (nsec): min=4520, max=51302, avg=5983.88, stdev=1835.07 00:11:53.880 clat (usec): min=389, max=9900, avg=3779.81, stdev=426.31 00:11:53.880 lat (usec): min=396, max=9950, avg=3785.79, stdev=426.91 00:11:53.880 clat percentiles (usec): 00:11:53.880 | 1.00th=[ 3130], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:11:53.880 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752], 00:11:53.880 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4228], 95.00th=[ 4621], 00:11:53.880 | 99.00th=[ 5080], 99.50th=[ 6325], 99.90th=[ 6915], 99.95th=[ 8455], 00:11:53.880 | 99.99th=[ 9765] 00:11:53.880 bw ( KiB/s): min=61445, max=70464, per=99.18%, avg=66783.00, stdev=4732.32, samples=3 00:11:53.880 iops : min=15361, max=17616, avg=16695.67, stdev=1183.22, samples=3 00:11:53.880 write: IOPS=16.9k, BW=65.9MiB/s (69.1MB/s)(132MiB/2001msec); 0 zone resets 00:11:53.880 slat (nsec): min=4460, max=71074, avg=6109.65, stdev=1823.99 00:11:53.880 clat (usec): min=362, max=9784, avg=3787.51, stdev=432.81 00:11:53.880 lat (usec): min=369, max=9801, avg=3793.62, stdev=433.42 00:11:53.880 clat percentiles 
(usec): 00:11:53.880 | 1.00th=[ 3130], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:11:53.880 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752], 00:11:53.880 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4293], 95.00th=[ 4686], 00:11:53.880 | 99.00th=[ 5080], 99.50th=[ 6325], 99.90th=[ 7242], 99.95th=[ 8586], 00:11:53.880 | 99.99th=[ 9503] 00:11:53.880 bw ( KiB/s): min=61748, max=70312, per=98.91%, avg=66724.00, stdev=4447.52, samples=3 00:11:53.880 iops : min=15437, max=17578, avg=16681.00, stdev=1111.88, samples=3 00:11:53.880 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:53.880 lat (msec) : 2=0.05%, 4=86.68%, 10=13.25% 00:11:53.880 cpu : usr=99.00%, sys=0.15%, ctx=4, majf=0, minf=604 00:11:53.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:53.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.880 issued rwts: total=33686,33748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.880 00:11:53.880 Run status group 0 (all jobs): 00:11:53.880 READ: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=132MiB (138MB), run=2001-2001msec 00:11:53.880 WRITE: bw=65.9MiB/s (69.1MB/s), 65.9MiB/s-65.9MiB/s (69.1MB/s-69.1MB/s), io=132MiB (138MB), run=2001-2001msec 00:11:53.880 ----------------------------------------------------- 00:11:53.880 Suppressions used: 00:11:53.880 count bytes template 00:11:53.880 1 32 /usr/src/fio/parse.c 00:11:53.880 1 8 libtcmalloc_minimal.so 00:11:53.880 ----------------------------------------------------- 00:11:53.880 00:11:53.880 21:11:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:53.880 21:11:04 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:53.880 00:11:53.880 real 0m18.055s 00:11:53.880 user 0m14.105s 00:11:53.880 sys 0m3.482s 00:11:53.880 21:11:04 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.880 ************************************ 00:11:53.880 21:11:04 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:53.880 END TEST nvme_fio 00:11:53.880 ************************************ 00:11:53.880 21:11:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:53.880 00:11:53.880 real 1m31.456s 00:11:53.880 user 3m44.804s 00:11:53.880 sys 0m15.224s 00:11:53.880 21:11:04 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.880 ************************************ 00:11:53.880 END TEST nvme 00:11:53.880 21:11:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:53.880 ************************************ 00:11:53.880 21:11:04 -- common/autotest_common.sh@1142 -- # return 0 00:11:53.880 21:11:04 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:53.880 21:11:04 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:53.880 21:11:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:53.880 21:11:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.880 21:11:04 -- common/autotest_common.sh@10 -- # set +x 00:11:53.880 ************************************ 00:11:53.880 START TEST nvme_scc 00:11:53.880 ************************************ 00:11:53.880 21:11:04 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:53.880 * Looking for test storage... 
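The scan_nvme_ctrls trace that fills the remainder of this log is functions.sh building one associative array per controller out of nvme-cli's id-ctrl output. A simplified sketch of that loop (the real helper quotes values via eval and preserves padded strings like sn and mn exactly):

    # Parse 'reg : val' lines of id-ctrl into an associative array, as traced below.
    declare -A nvme0=()
    while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue
      reg=${reg//[[:space:]]/}         # vid, ssvid, sn, mn, fr, rab, ieee, mdts...
      nvme0[$reg]=${val# }             # e.g. nvme0[vid]=0x1b36, nvme0[mdts]=7
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[sn]}"                # 12341 (QEMU controller serial, plus padding)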
00:11:53.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:53.880 21:11:04 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.880 21:11:04 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.880 21:11:04 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.880 21:11:04 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.880 21:11:04 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.880 21:11:04 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.880 21:11:04 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.880 21:11:04 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:53.880 21:11:04 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:53.880 21:11:04 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:53.880 21:11:04 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.880 21:11:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:53.880 21:11:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:53.880 21:11:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:53.880 21:11:04 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:53.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.880 Waiting for block devices as requested 00:11:53.880 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:54.138 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:54.138 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:54.138 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:59.418 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:59.418 21:11:10 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:59.418 21:11:10 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:59.418 21:11:10 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:59.418 21:11:10 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:59.418 21:11:10 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:59.418 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:59.418 
00:11:59.418 21:11:10 nvme_scc -- # nvme0 id-ctrl:
00:11:59.418 21:11:10 nvme_scc -- #   vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:11:59.418 21:11:10 nvme_scc -- #   rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:11:59.418 21:11:10 nvme_scc -- #   nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:11:59.418 21:11:10 nvme_scc -- #   wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
00:11:59.419 21:11:10 nvme_scc -- #   hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
00:11:59.419 21:11:10 nvme_scc -- #   pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:11:59.419 21:11:10 nvme_scc -- #   awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:11:59.419 21:11:10 nvme_scc -- #   subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:59.420 21:11:10 nvme_scc -- #   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:11:59.420 21:11:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:11:59.420 21:11:10 nvme_scc -- # nvme0n1 id-ns:
00:11:59.420 21:11:10 nvme_scc -- #   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:11:59.420 21:11:10 nvme_scc -- #   rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:11:59.420 21:11:10 nvme_scc -- #   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:11:59.420 21:11:10 nvme_scc -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:59.421 21:11:10 nvme_scc -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:11:59.421 21:11:10 nvme_scc -- #   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
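The eight lbafN entries are the namespace's LBA formats; the low nibble of flbas selects the active one, and lbads is the log2 of the block size, so flbas=0x4 with lbads:12 means 4096-byte blocks. A quick check of that arithmetic (values taken from the dump above; the "ms:.. lbads:.. rp:.." layout is assumed):

  #!/usr/bin/env bash
  flbas=0x4
  lbaf4='ms:0 lbads:12 rp:0 (in use)'
  fmt=$((flbas & 0xf))                     # bits 3:0 select the format
  [[ $lbaf4 =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
  echo "lbaf$fmt in use, block size $((1 << lbads)) bytes"   # prints 4096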
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
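Once a controller and its namespaces are parsed, the scan indexes them in a handful of lookup tables: ctrls and bdfs key controller data by name, nvmes maps each controller to its namespace array, and ordered_ctrls keeps a positional list whose index is the name with the nvme prefix stripped. Roughly (array names mirror the trace; the enclosing scan loop is elided):

  #!/usr/bin/env bash
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  ctrl_dev=nvme0 pci=0000:00:11.0
  ctrls[$ctrl_dev]=$ctrl_dev                   # name -> controller entry
  nvmes[$ctrl_dev]=${ctrl_dev}_ns              # name -> its namespace map
  bdfs[$ctrl_dev]=$pci                         # name -> PCI address
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # "nvme0" -> index 0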
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:11:59.421 21:11:10 nvme_scc -- scripts/common.sh@15 -- # local i
00:11:59.421 21:11:10 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]]
00:11:59.421 21:11:10 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:59.421 21:11:10 nvme_scc -- scripts/common.sh@24 -- # return 0
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:11:59.421 21:11:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:59.421 21:11:10 nvme_scc -- # nvme1 id-ctrl:
00:11:59.421 21:11:10 nvme_scc -- #   vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:11:59.421 21:11:10 nvme_scc -- #   rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:11:59.422 21:11:10 nvme_scc -- #   nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:11:59.422 21:11:10 nvme_scc -- #   wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:59.422 21:11:10 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.422 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:59.423 21:11:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 
21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.423 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
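The repeated functions.sh@21-@23 lines above are the harness's nvme_get helper at work: it pipes the key:value output of `nvme id-ctrl` / `nvme id-ns` through `IFS=: read -r reg val` and eval's each non-empty pair into a bash associative array (nvme1, nvme1n1, ...) so later test steps can query identify fields by name. A condensed sketch of that pattern, not the harness's exact code:

    # Illustrative sketch: capture `nvme id-ctrl` key:value output into
    # an associative array, mirroring the nvme_get loop traced above.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field name, e.g. "oncs"
        val=${val# }                    # value kept near-verbatim
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme1)
    echo "ONCS: ${ctrl[oncs]}"          # 0x15d on this QEMU controller

The `[[ -n ... ]]` guard visible before every assignment in the trace serves the same purpose as the guard in the sketch: fields the controller leaves empty are simply skipped.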
00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.424 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:59.688 21:11:10 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:59.688 21:11:10 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:59.688 21:11:10 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:59.688 21:11:10 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:59.688 21:11:10 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:59.688 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.689 21:11:10 nvme_scc -- 
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:11:59.689 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:11:59.690 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:11:59.690 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:11:59.690 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:11:59.690 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:11:59.690 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:11:59.690 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:11:59.690 21:11:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:11:59.690 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
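The trace above is SPDK's nvme_get helper caching every id-ctrl field of the QEMU controller (subnqn nqn.2019-08.org.qemu:12342) in the bash associative array nvme2; per the NVMe spec the temperature fields are in Kelvin, so wctemp=343 is roughly 70 C and cctemp=373 is 100 C. A minimal standalone sketch of the same parsing pattern, assuming nvme-cli's plain-text "field : value" output and an nvme binary on PATH (names here are illustrative, not the exact functions.sh code):

  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                 # field name, padding stripped
      val=${val#"${val%%[![:space:]]*}"}       # left-trim the value
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme2)
  echo "nn=${ctrl[nn]} oacs=${ctrl[oacs]}"     # e.g. nn=256 oacs=0x12a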
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@53 -- local -n _ctrl_ns=nvme2_ns
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/${ctrl##*/}n"*
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@55 -- [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@56 -- ns_dev=nvme2n1
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@57 -- nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@17 -- local ref=nvme2n1 reg val
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@18 -- shift
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@20 -- local -gA 'nvme2n1=()'
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:11:59.691 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[1]=nvme2n1
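nvme2n1 reports eight LBA formats, with flbas=0x4 selecting lbaf4 (ms:0 lbads:12), i.e. 4096-byte blocks with no metadata, and nsze=0x100000 blocks. A worked check of the resulting capacity, with the values copied from the trace above:

  # flbas bits 3:0 select the in-use LBA format; lbads is log2(block size)
  flbas=0x4 nsze=0x100000 lbads=12
  echo $((flbas & 0xf))                        # -> 4, matching "(in use)" on lbaf4
  echo $((nsze * (1 << lbads)))                # 1048576 * 4096 = 4294967296 bytes (4 GiB)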
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/${ctrl##*/}n"*
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@55 -- [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@56 -- ns_dev=nvme2n2
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@57 -- nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@17 -- local ref=nvme2n2 reg val
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@18 -- shift
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@20 -- local -gA 'nvme2n2=()'
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:11:59.692 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:11:59.693 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[2]=nvme2n2
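Each pass of the loop registers the namespace in the controller's _ctrl_ns map keyed by namespace id (functions.sh@58), which is how the trace advances from nvme2n1 through nvme2n2 to nvme2n3. A standalone sketch of that sysfs enumeration pattern, with the array name chosen for illustration:

  declare -A ctrl_ns
  ctrl=/sys/class/nvme/nvme2
  for ns in "$ctrl/${ctrl##*/}n"*; do          # globs nvme2n1 nvme2n2 nvme2n3
      [[ -e $ns ]] || continue                 # skip if the glob matched nothing
      ctrl_ns[${ns##*n}]=${ns##*/}             # e.g. ctrl_ns[3]=nvme2n3
  done
  echo "${#ctrl_ns[@]} namespaces: ${ctrl_ns[*]}"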
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/${ctrl##*/}n"*
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@55 -- [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@56 -- ns_dev=nvme2n3
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@57 -- nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@17 -- local ref=nvme2n3 reg val
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@18 -- shift
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@20 -- local -gA 'nvme2n3=()'
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:11:59.694 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
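The block above is the tail of one nvme_get pass: functions.sh feeds nvme-cli's id-ns output for /dev/nvme2n3 through a read loop that splits each line on the first ':' and evals the register/value pair into the nvme2n3 associative array, including the eight lbaf entries (flbas=0x4 marks lbaf4, ms:0 lbads:12, as the format in use). A condensed, self-contained sketch of that mechanism, reconstructed from the trace rather than copied from functions.sh; it assumes nvme-cli is installed and bash >= 4.3 for namerefs:

    #!/usr/bin/env bash
    # Sketch of the parse loop this trace repeats for every controller and
    # namespace: "reg : val" lines from nvme-cli become array entries.
    parse_id_ns() {
      local dev=$1 reg val
      local -n out=$2                 # nameref, like the trace's 'local -gA'
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # "lbaf  4 " -> "lbaf4", "nsze " -> "nsze"
        val=${val# }                  # drop nvme-cli's padding after the ':'
        [[ -n $reg && -n $val ]] && out[$reg]=$val
      done < <(nvme id-ns "$dev")
    }

    declare -A ns=()
    parse_id_ns /dev/nvme2n3 ns
    echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"

    # What the captured values mean: nsze counts logical blocks, and lbads in
    # the active format is log2 of the block size. With this log's values,
    # 0x100000 blocks of 2^12 bytes:
    echo "capacity: $(( 0x100000 * (1 << 12) )) bytes"   # 4294967296 B = 4 GiB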
00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:59.695 21:11:11 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:59.695 21:11:11 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:59.695 21:11:11 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:59.695 21:11:11 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.695 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:59.696 21:11:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:59.696 21:11:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:59.696 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:59.697 21:11:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:59.697 21:11:11 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:59.697 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
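Two of the nvme3 fields captured just above are packed nibbles rather than plain counts: per the NVMe base specification, SQES and CQES carry the required submission/completion queue entry size in bits 3:0 and the maximum in bits 7:4, each as log2 of bytes. A throwaway decode of this controller's values:

    # Decode the packed queue-entry-size fields traced above (0x66, 0x44).
    sqes=0x66 cqes=0x44
    echo "SQ entry: min $(( 1 << (sqes & 0xf) )) B, max $(( 1 << (sqes >> 4) )) B"  # 64/64
    echo "CQ entry: min $(( 1 << (cqes & 0xf) )) B, max $(( 1 << (cqes >> 4) )) B"  # 16/16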
00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:59.698 
21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
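With the nvme3 identify dump finished, the one field worth decoding is oncs=0x15d, because the controller selection that follows keys on it: ONCS is the Optional NVM Command Support bitmap, and bit 8 advertises the Copy command (simple copy). A quick decode, with bit names taken from the NVMe base specification (sketch only, not test code):

    # Decode oncs=0x15d, the value every QEMU controller reports in this run.
    oncs=0x15d
    names=("Compare" "Write Uncorrectable" "Dataset Management" "Write Zeroes"
           "Save/Select in Features" "Reservations" "Timestamp" "Verify" "Copy")
    for bit in "${!names[@]}"; do
      (( oncs & 1 << bit )) && echo "ONCS bit $bit: ${names[$bit]}"
    done
    # Bits 0, 2, 3, 4, 6 and 8 are set; bit 8 (Copy) is what the upcoming
    # ctrl_has_scc check '(( oncs & 1 << 8 ))' tests for.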
00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:59.698 21:11:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:59.698 21:11:11 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:59.698 21:11:11 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:59.699 21:11:11 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:59.699 21:11:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:59.699 21:11:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:59.699 21:11:11 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:00.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:00.863 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:00.863 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:00.863 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:00.863 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:01.121 21:11:12 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:01.121 21:11:12 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:01.121 21:11:12 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.121 21:11:12 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:01.121 ************************************ 00:12:01.121 START TEST nvme_simple_copy 00:12:01.121 ************************************ 00:12:01.121 21:11:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:01.378 Initializing NVMe Controllers 00:12:01.378 Attaching to 0000:00:10.0 00:12:01.378 Controller supports SCC. Attached to 0000:00:10.0 00:12:01.378 Namespace ID: 1 size: 6GB 00:12:01.378 Initialization complete. 00:12:01.378 00:12:01.378 Controller QEMU NVMe Ctrl (12340 ) 00:12:01.378 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:12:01.378 Namespace Block Size:4096 00:12:01.378 Writing LBAs 0 to 63 with Random Data 00:12:01.378 Copied LBAs from 0 - 63 to the Destination LBA 256 00:12:01.378 LBAs matching Written Data: 64 00:12:01.378 00:12:01.378 real 0m0.314s 00:12:01.378 user 0m0.121s 00:12:01.378 sys 0m0.091s 00:12:01.378 21:11:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.378 21:11:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:12:01.378 ************************************ 00:12:01.378 END TEST nvme_simple_copy 00:12:01.378 ************************************ 00:12:01.378 21:11:12 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:12:01.378 00:12:01.378 real 0m8.109s 00:12:01.378 user 0m1.315s 00:12:01.378 sys 0m1.748s 00:12:01.378 21:11:12 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.378 21:11:12 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:01.378 ************************************ 00:12:01.378 END TEST nvme_scc 00:12:01.378 ************************************ 00:12:01.378 21:11:12 -- common/autotest_common.sh@1142 -- # return 0 00:12:01.378 21:11:12 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:12:01.378 21:11:12 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:12:01.378 21:11:12 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:12:01.378 21:11:12 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:12:01.378 21:11:12 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:12:01.378 21:11:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:01.378 21:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.378 21:11:12 -- common/autotest_common.sh@10 -- # set +x 00:12:01.378 ************************************ 00:12:01.378 START TEST nvme_fdp 00:12:01.378 ************************************ 00:12:01.378 21:11:12 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:12:01.636 * Looking for test storage... 
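The probe a few lines up (functions.sh@190-207) is what routed the simple-copy test to nvme1: get_ctrls_with_feature walks every controller found by the scan, reads its oncs through a nameref, tests bit 8 with (( oncs & 1 << 8 )), and echoes each match; the harness then takes the first one (nvme1 at 0000:00:10.0 in this run). A self-contained sketch of that selection; the real code dereferences the per-controller arrays nvme0..nvme3, which this demo replaces with a plain stand-in map oncs_of:

    # Sketch of get_ctrls_with_feature/ctrl_has_scc as seen in the trace.
    declare -A oncs_of=([nvme0]=0x15d [nvme1]=0x15d [nvme2]=0x15d [nvme3]=0x15d)

    ctrl_has_scc() {
      (( ${oncs_of[$1]} & 1 << 8 ))   # ONCS bit 8: Copy command support
    }

    scc_ctrls=()
    for ctrl in "${!oncs_of[@]}"; do
      ctrl_has_scc "$ctrl" && scc_ctrls+=("$ctrl")
    done
    echo "SCC-capable controllers: ${scc_ctrls[*]}"   # all four, in this log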
00:12:01.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:01.636 21:11:12 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:01.636 21:11:12 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:01.636 21:11:12 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:01.636 21:11:12 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:01.636 21:11:12 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:01.636 21:11:12 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:01.636 21:11:12 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:01.636 21:11:12 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:12:01.636 21:11:12 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:12:01.636 21:11:12 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:12:01.636 21:11:12 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
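[Annotation] The four declarations above (ctrls, nvmes, bdfs, ordered_ctrls) are the scanner's entire state. A condensed, runnable sketch of the loop scan_nvme_ctrls runs below; the array bookkeeping mirrors functions.sh@60-63 as traced later, while the sysfs-to-PCI lookup is an assumption added for illustration:

    #!/usr/bin/env bash
    declare -A ctrls nvmes bdfs   # controller -> id-ctrl array / ns map name / PCI address
    declare -a ordered_ctrls      # index N -> nvmeN, for stable iteration

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                               # e.g. nvme0
        pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:11.0 (sysfs assumption)
        ctrls[$ctrl_dev]=$ctrl_dev                         # cf. functions.sh@60
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                    # cf. functions.sh@61
        bdfs[$ctrl_dev]=$pci                               # cf. functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev         # cf. functions.sh@63
    done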
00:12:01.636 21:11:12 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:01.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:02.150 Waiting for block devices as requested
00:12:02.150 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:02.150 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:02.407 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:02.407 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:07.680 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:12:07.680 21:11:18 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:12:07.680 21:11:18 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:12:07.680 21:11:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:07.680 21:11:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:12:07.680 21:11:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:12:07.680 21:11:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:12:07.680 21:11:18 nvme_fdp -- scripts/common.sh@15 -- # local i
00:12:07.680 21:11:18 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]]
00:12:07.681 21:11:18 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]]
00:12:07.681 21:11:18 nvme_fdp -- scripts/common.sh@24 -- # return 0
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
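[Annotation] The IFS=: / read / eval cycle that fills the rest of this trace is nvme_get (functions.sh@16-23): nvme-cli prints one "register : value" pair per line, the first colon splits the pair (later colons, as in subnqn's nqn.2019-08.org.qemu:12341, stay in the value), and eval stores it in a per-controller associative array. A simplified but runnable sketch; the whitespace trimming here is cruder than the original's:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # e.g. declares the global assoc array nvme0
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # 'vid      ' -> 'vid'
            val=${val# }                   # drop the single space after ':'
            [[ -n $reg && -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
        done < <(nvme "$@")                # the log uses /usr/local/src/nvme-cli/nvme
    }
    nvme_get nvme0 id-ctrl /dev/nvme0      # same call shape as functions.sh@52 above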
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval
'nvme0[rtd3e]="0"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:07.681 21:11:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.681 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:07.682 21:11:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:07.682 21:11:18 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:07.682 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:07.683 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:07.684 
21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:07.684 
21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:07.684 21:11:18 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.684 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
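[Annotation] The id-ns fields traced above (the remaining LBA formats continue just below) are enough to compute the namespace's byte size: flbas=0x4 selects LBA format 4 in its low bits, whose descriptor reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks, and nsze=0x140000 is the block count. In bash arithmetic, with the values copied from the trace:

    nsze=0x140000                      # namespace size in logical blocks (from the dump)
    lbads=12                           # log2 block size of lbaf4, the format flbas selects
    echo $(( nsze * (1 << lbads) ))    # 5368709120 bytes = 5 GiB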
00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:07.685 21:11:19 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:07.685 21:11:19 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:07.685 21:11:19 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:07.685 21:11:19 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:07.685 21:11:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 
21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:07.685 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
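Just above, the version register comes back as nvme1[ver]=0x10400. Per the NVMe base specification the VS value packs the major, minor and tertiary numbers into bytes 3:2, 1 and 0, so this QEMU controller reports NVMe 1.4.0. A quick check in shell arithmetic:

    ver=0x10400    # value captured in the trace above
    printf 'NVMe %d.%d.%d\n' "$(( ver >> 16 ))" "$(( (ver >> 8) & 0xff ))" "$(( ver & 0xff ))"
    # -> NVMe 1.4.0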
00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:07.686 21:11:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
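Two of the controller fields captured in this stretch are worth decoding: oacs (0x12a) is the Optional Admin Command Support bitmask, and wctemp/cctemp (343/373) are temperature thresholds reported in Kelvin. A small sketch, using the bit positions from the NVMe base specification:

    oacs=0x12a                                                      # captured above
    (( oacs & 1 << 1 )) && echo 'Format NVM supported'              # bit 1
    (( oacs & 1 << 3 )) && echo 'Namespace Management supported'    # bit 3
    (( oacs & 1 << 8 )) && echo 'Doorbell Buffer Config supported'  # bit 8
    wctemp=343 cctemp=373
    echo "warning at $(( wctemp - 273 ))C, critical at $(( cctemp - 273 ))C"  # 70C / 100C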
00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:07.686 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:07.687 21:11:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
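The sqes=0x66 and cqes=0x44 values just above encode queue-entry sizes as powers of two: the low nibble is the required size, the high nibble the maximum. For this controller both work out to the standard 64-byte submission and 16-byte completion entries:

    sqes=0x66 cqes=0x44                                                 # captured above
    echo "SQE: $(( 1 << (sqes & 0xf) ))-$(( 1 << (sqes >> 4) )) bytes"  # 64-64 bytes
    echo "CQE: $(( 1 << (cqes & 0xf) ))-$(( 1 << (cqes >> 4) )) bytes"  # 16-16 bytes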
00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:07.687 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
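With nsze and flbas for nvme1n1 now captured, the namespace size can be computed: flbas bits 3:0 select LBA format 7, whose descriptor later in this trace carries lbads:12, i.e. 4096-byte blocks. A sketch of the arithmetic, with values taken from this trace:

    nsze=0x17a17a flbas=0x7 lbads=12   # lbads from the lbaf7 descriptor further down
    fmt=$(( flbas & 0xf ))             # active LBA format index -> 7
    bytes=$(( nsze << lbads ))         # 1548666 blocks * 4096 B = 6343335936
    echo "lbaf$fmt: $(( nsze )) blocks, $bytes bytes (~5.9 GiB)"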
00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.688 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
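functions.sh never passes these arrays around by value; it records their names in bookkeeping maps (ctrls, nvmes, bdfs, seen at @60-@63 in this trace) and dereferences them later with a nameref, as in the `local -n _ctrl_ns=nvme1_ns` above. A minimal sketch of that indirection; the helper is hypothetical, the field values come from this trace:

    declare -A nvme1n1=( [nsze]=0x17a17a [flbas]=0x7 )   # subset of captured fields
    declare -A bdfs=( [nvme1]=0000:00:10.0 )
    field() {                     # hypothetical helper, not part of functions.sh
        local -n _arr=$1          # nameref: $1 holds the *name* of an array
        printf '%s\n' "${_arr[$2]}"
    }
    field nvme1n1 nsze            # -> 0x17a17a
    echo "nvme1 sits at ${bdfs[nvme1]}"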
00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 
21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
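The lbaf values being stored here are themselves small key:value records (ms, lbads, rp). If a script needs the individual pieces rather than the whole string, a case loop splits one cleanly; the descriptor below is copied from later in this trace (nvme1n1 lbaf7):

    desc='ms:64 lbads:12 rp:0 (in use)'
    for kv in $desc; do
        case $kv in
            ms:*)    ms=${kv#ms:} ;;
            lbads:*) lbads=${kv#lbads:} ;;
            rp:*)    rp=${kv#rp:} ;;
        esac
    done
    echo "metadata ${ms} B, block $(( 1 << lbads )) B, rp $rp"   # 64 B, 4096 B, 0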
00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:07.689 21:11:19 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:07.689 21:11:19 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:07.689 21:11:19 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:07.689 21:11:19 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.689 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.690 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:07.691 21:11:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:07.691 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:07.692 21:11:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:07.692 21:11:19 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:07.692 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.693 
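The block above is the per-field xtrace of the capture helper in nvme/functions.sh: each line of nvme-cli output is split on ':' and eval'd into a global associative array named after the device node. A minimal, self-contained sketch of that pattern, assuming nvme-cli at /usr/local/src/nvme-cli/nvme as in this run (hypothetical helper name, not the verbatim SPDK function):

    # nvme_get_sketch <array-name> <nvme-cli args...>
    # e.g. nvme_get_sketch nvme2 id-ctrl /dev/nvme2  ->  ${nvme2[ver]}, ${nvme2[oacs]}, ...
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # global associative array, e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # "ps    0 " -> "ps0"
            val=${val# }                         # drop the pad after the colon
            [[ -n $val ]] && eval "${ref}[$reg]=\"\$val\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Note that read -r reg val keeps everything after the first colon in val, which is how multi-colon values such as ps0 ('mp:25.00W operational enlat:16 ...') survive intact, and the process substitution keeps the assignments in the current shell instead of a pipeline subshell -- which is why every assignment shows up in this xtrace.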
00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh: nvme_get nvme2 -- power state / workload values:
00:12:07.693 21:11:19   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh: scanning namespaces of nvme2 (local -n _ctrl_ns=nvme2_ns)
00:12:07.693 21:11:19 nvme_fdp -- nvme/functions.sh: found /sys/class/nvme/nvme2/nvme2n1 -- nvme_get nvme2n1 id-ns /dev/nvme2n1:
00:12:07.693 21:11:19   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:12:07.693 21:11:19   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:12:07.694 21:11:19   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:12:07.694 21:11:19   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:07.694 21:11:19   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:12:07.694 21:11:19   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:07.694 21:11:19 nvme_fdp -- nvme/functions.sh: _ctrl_ns[1]=nvme2n1
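The geometry that matters for the test falls out of three of those fields: flbas selects the in-use LBA format, the chosen lbaf entry gives the block size, and nsze gives the block count. A quick check of the values just captured (plain bash arithmetic; variable names are illustrative only):

    flbas=0x4; nsze=0x100000
    idx=$((flbas & 0xf))                  # low nibble of flbas -> LBA format 4
    lbads=12                              # from lbaf4: 'ms:0 lbads:12 rp:0 (in use)'
    bs=$((1 << lbads))                    # 2^12 = 4096-byte data blocks, no metadata (ms:0)
    printf '%d blocks x %d B = %d GiB\n' "$((nsze))" "$bs" "$((nsze * bs / 1024**3))"
    # -> 1048576 blocks x 4096 B = 4 GiB

So each of these qemu-backed namespaces is a 4 GiB device with 4096-byte blocks.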
00:12:07.694 21:11:19 nvme_fdp -- nvme/functions.sh: found /sys/class/nvme/nvme2/nvme2n2 -- nvme_get nvme2n2 id-ns /dev/nvme2n2:
00:12:07.694 21:11:19   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:12:07.695 21:11:19   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:12:07.695 21:11:19   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:12:07.696 21:11:19   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:07.696 21:11:19   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:12:07.696 21:11:19   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh: _ctrl_ns[2]=nvme2n2
nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.696 21:11:19 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:07.696 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:07.697 21:11:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:07.697 21:11:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:08.011 21:11:19 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:08.011 21:11:19 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:08.011 21:11:19 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:08.011 21:11:19 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:08.011 21:11:19 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:08.011 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
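The eval chains above are nvme/functions.sh's nvme_get loop filling the nvme3 associative array one register at a time: each output line of nvme id-ctrl is split on the first colon into a register name and a value. A minimal standalone sketch of that idiom, assuming nvme-cli's plain-text "field : value" output (array name and device path follow the trace; the whitespace trimming is simplified relative to the real helper):

  #!/usr/bin/env bash
  # Build an associative array from `nvme id-ctrl` output, mirroring
  # the IFS=: / read -r reg val / eval loop traced above.
  declare -A nvme3=()
  while IFS=: read -r reg val; do
      reg=${reg// /}        # strip padding spaces from the register name
      val=${val# }          # drop the single space after the colon
      [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
  printf 'vid=%s ctratt=%s\n' "${nvme3[vid]}" "${nvme3[ctratt]}"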
00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
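Among the registers captured for nvme3 a moment earlier is ctratt=0x88010, the Controller Attributes bit mask; FDP support is advertised by bit 19, which is exactly what ctrl_has_fdp tests further down in this trace. A quick arithmetic check of that value (a sketch; the bit position comes from the test in the trace, nothing new is assumed):

  # ctratt as parsed above for nvme3; bit 19 signals FDP support.
  ctratt=0x88010
  if (( ctratt & 1 << 19 )); then
      echo "FDP supported"        # 0x88010 & 0x80000 == 0x80000
  else
      echo "FDP not supported"    # e.g. the plain 0x8000 of the other controllers
  fi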
00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:08.012 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 
21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:08.013 21:11:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:08.013 21:11:19 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:08.014 21:11:19 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:12:08.014 21:11:19 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:12:08.014 21:11:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:08.014 21:11:19 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:08.014 21:11:19 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:08.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:08.869 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:08.870 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:08.870 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:09.128 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:09.128 21:11:20 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:09.128 21:11:20 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:09.128 21:11:20 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.128 21:11:20 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:09.128 ************************************ 00:12:09.128 START TEST nvme_flexible_data_placement 00:12:09.128 ************************************ 00:12:09.128 21:11:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:09.386 Initializing NVMe Controllers 00:12:09.386 Attaching to 0000:00:13.0 00:12:09.386 Controller supports FDP Attached to 0000:00:13.0 00:12:09.386 Namespace ID: 1 Endurance Group ID: 1 
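Aside: the ctrl_has_fdp walk traced above reduces to a single bit test. CTRATT bit 19 advertises Flexible Data Placement, which is why nvme3 (ctratt=0x88010) is selected while the controllers reporting 0x8000 are skipped. A minimal standalone recap in bash, using only values taken from this run:

    # CTRATT bit 19 = FDP supported; 1 << 19 == 0x80000
    ctratt=0x88010                       # nvme3 here; nvme0/nvme1/nvme2 report 0x8000
    if (( ctratt & 1 << 19 )); then
        echo "controller supports FDP"   # 0x88010 & 0x80000 is non-zero, so this fires
    fi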
00:12:09.386 Initialization complete.
00:12:09.386 
00:12:09.386 ==================================
00:12:09.386 == FDP tests for Namespace: #01 ==
00:12:09.386 ==================================
00:12:09.386 
00:12:09.386 Get Feature: FDP:
00:12:09.386 =================
00:12:09.386 Enabled: Yes
00:12:09.386 FDP configuration Index: 0
00:12:09.386 
00:12:09.386 FDP configurations log page
00:12:09.386 ===========================
00:12:09.386 Number of FDP configurations: 1
00:12:09.386 Version: 0
00:12:09.386 Size: 112
00:12:09.386 FDP Configuration Descriptor: 0
00:12:09.386 Descriptor Size: 96
00:12:09.386 Reclaim Group Identifier format: 2
00:12:09.386 FDP Volatile Write Cache: Not Present
00:12:09.386 FDP Configuration: Valid
00:12:09.386 Vendor Specific Size: 0
00:12:09.386 Number of Reclaim Groups: 2
00:12:09.386 Number of Reclaim Unit Handles: 8
00:12:09.386 Max Placement Identifiers: 128
00:12:09.386 Number of Namespaces Supported: 256
00:12:09.386 Reclaim Unit Nominal Size: 6000000 bytes
00:12:09.386 Estimated Reclaim Unit Time Limit: Not Reported
00:12:09.386 RUH Desc #000: RUH Type: Initially Isolated
00:12:09.386 RUH Desc #001: RUH Type: Initially Isolated
00:12:09.386 RUH Desc #002: RUH Type: Initially Isolated
00:12:09.386 RUH Desc #003: RUH Type: Initially Isolated
00:12:09.386 RUH Desc #004: RUH Type: Initially Isolated
00:12:09.386 RUH Desc #005: RUH Type: Initially Isolated
00:12:09.386 RUH Desc #006: RUH Type: Initially Isolated
00:12:09.386 RUH Desc #007: RUH Type: Initially Isolated
00:12:09.386 
00:12:09.386 FDP reclaim unit handle usage log page
00:12:09.386 ======================================
00:12:09.386 Number of Reclaim Unit Handles: 8
00:12:09.386 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:12:09.386 RUH Usage Desc #001: RUH Attributes: Unused
00:12:09.386 RUH Usage Desc #002: RUH Attributes: Unused
00:12:09.386 RUH Usage Desc #003: RUH Attributes: Unused
00:12:09.386 RUH Usage Desc #004: RUH Attributes: Unused
00:12:09.386 RUH Usage Desc #005: RUH Attributes: Unused
00:12:09.386 RUH Usage Desc #006: RUH Attributes: Unused
00:12:09.386 RUH Usage Desc #007: RUH Attributes: Unused
00:12:09.386 
00:12:09.386 FDP statistics log page
00:12:09.386 =======================
00:12:09.386 Host bytes with metadata written: 864677888
00:12:09.386 Media bytes with metadata written: 864919552
00:12:09.386 Media bytes erased: 0
00:12:09.386 
00:12:09.386 FDP Reclaim unit handle status
00:12:09.386 ==============================
00:12:09.386 Number of RUHS descriptors: 2
00:12:09.386 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002761
00:12:09.386 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:12:09.386 
00:12:09.386 FDP write on placement id: 0 success
00:12:09.386 
00:12:09.386 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:12:09.386 
00:12:09.386 IO mgmt send: RUH update for Placement ID: #0 Success
00:12:09.386 
00:12:09.386 Get Feature: FDP Events for Placement handle: #0
00:12:09.386 ========================
00:12:09.386 Number of FDP Events: 6
00:12:09.386 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:12:09.386 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:12:09.386 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:12:09.386 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:12:09.386 FDP Event: #4 Type: Media Reallocated Enabled: No
00:12:09.386 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
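Note: RUAMW in the RUHS descriptors above is Reclaim Unit Available Media Writes, a hex count (in logical blocks, if I read the FDP spec's units correctly) of how much can still be written to that reclaim unit; converting the two values shown:

    printf '%d\n' 0x2761   # descriptor #0000 -> 10081
    printf '%d\n' 0x6000   # descriptor #0001 -> 24576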
00:12:09.386 
00:12:09.386 FDP events log page
00:12:09.386 ===================
00:12:09.386 Number of FDP events: 1
00:12:09.386 FDP Event #0:
00:12:09.386 Event Type: RU Not Written to Capacity
00:12:09.386 Placement Identifier: Valid
00:12:09.386 NSID: Valid
00:12:09.386 Location: Valid
00:12:09.386 Placement Identifier: 0
00:12:09.386 Event Timestamp: 9
00:12:09.386 Namespace Identifier: 1
00:12:09.386 Reclaim Group Identifier: 0
00:12:09.386 Reclaim Unit Handle Identifier: 0
00:12:09.386 
00:12:09.386 FDP test passed
00:12:09.386 
00:12:09.386 real 0m0.283s
00:12:09.386 user 0m0.108s
00:12:09.386 sys 0m0.074s
00:12:09.387 21:11:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:09.387 ************************************
00:12:09.387 END TEST nvme_flexible_data_placement
00:12:09.387 21:11:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:12:09.387 ************************************
00:12:09.387 21:11:20 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0
00:12:09.387 
00:12:09.387 real 0m8.042s
00:12:09.387 user 0m1.266s
00:12:09.387 sys 0m1.703s
00:12:09.387 21:11:20 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:09.387 21:11:20 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:12:09.387 ************************************
00:12:09.387 END TEST nvme_fdp
00:12:09.387 ************************************
00:12:09.646 21:11:20 -- common/autotest_common.sh@1142 -- # return 0
00:12:09.646 21:11:20 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]]
00:12:09.646 21:11:20 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:09.646 21:11:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:12:09.646 21:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:09.646 21:11:20 -- common/autotest_common.sh@10 -- # set +x
00:12:09.646 ************************************
00:12:09.646 START TEST nvme_rpc
00:12:09.646 ************************************
00:12:09.646 21:11:20 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:09.646 * Looking for test storage...
00:12:09.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:09.646 21:11:21 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.646 21:11:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:12:09.646 21:11:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:09.646 21:11:21 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71993 00:12:09.646 21:11:21 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:09.646 21:11:21 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:09.646 21:11:21 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71993 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 71993 ']' 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.646 21:11:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.905 [2024-07-14 21:11:21.218602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:09.905 [2024-07-14 21:11:21.219450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71993 ] 00:12:09.906 [2024-07-14 21:11:21.400625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:10.165 [2024-07-14 21:11:21.637006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.165 [2024-07-14 21:11:21.637027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.100 21:11:22 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.100 21:11:22 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:11.100 21:11:22 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:11.100 Nvme0n1 00:12:11.359 21:11:22 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:11.359 21:11:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:11.617 request: 00:12:11.617 { 00:12:11.617 "bdev_name": "Nvme0n1", 00:12:11.617 "filename": "non_existing_file", 00:12:11.617 "method": "bdev_nvme_apply_firmware", 00:12:11.617 "req_id": 1 00:12:11.617 } 00:12:11.617 Got JSON-RPC error response 00:12:11.617 response: 00:12:11.617 { 00:12:11.617 "code": -32603, 00:12:11.617 "message": "open file failed." 00:12:11.617 } 00:12:11.617 21:11:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:11.617 21:11:22 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:11.617 21:11:22 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:11.880 21:11:23 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:11.880 21:11:23 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71993 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 71993 ']' 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 71993 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71993 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:11.880 killing process with pid 71993 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71993' 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@967 -- # kill 71993 00:12:11.880 21:11:23 nvme_rpc -- common/autotest_common.sh@972 -- # wait 71993 00:12:13.783 00:12:13.783 real 0m4.240s 00:12:13.783 user 0m8.053s 00:12:13.783 sys 0m0.654s 00:12:13.783 21:11:25 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.783 ************************************ 00:12:13.783 END TEST nvme_rpc 00:12:13.783 ************************************ 00:12:13.783 21:11:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.783 21:11:25 -- common/autotest_common.sh@1142 -- # return 0 00:12:13.783 21:11:25 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
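Aside, before the timeouts output begins: the nvme_rpc failure path above is easy to replay by hand against a running spdk_tgt; a sketch using the same three RPCs the test traced (script path and BDF as in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes Nvme0n1
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1             # expected failure: code -32603, "open file failed."
    $rpc bdev_nvme_detach_controller Nvme0                              # cleanup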
00:12:13.783 21:11:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:13.783 21:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.783 21:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:13.783 ************************************ 00:12:13.783 START TEST nvme_rpc_timeouts 00:12:13.783 ************************************ 00:12:13.783 21:11:25 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:13.783 * Looking for test storage... 00:12:14.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:14.042 21:11:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:14.042 21:11:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72065 00:12:14.042 21:11:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72065 00:12:14.042 21:11:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72089 00:12:14.042 21:11:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:14.042 21:11:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:14.042 21:11:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72089 00:12:14.042 21:11:25 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 72089 ']' 00:12:14.042 21:11:25 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.042 21:11:25 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.042 21:11:25 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.042 21:11:25 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.042 21:11:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:14.042 [2024-07-14 21:11:25.449747] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:14.042 [2024-07-14 21:11:25.449964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72089 ] 00:12:14.301 [2024-07-14 21:11:25.622446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:14.301 [2024-07-14 21:11:25.824587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.301 [2024-07-14 21:11:25.824589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.239 21:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.239 21:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:12:15.239 Checking default timeout settings: 00:12:15.239 21:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:15.239 21:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:15.499 Making settings changes with rpc: 00:12:15.499 21:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:15.499 21:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:15.758 Check default vs. modified settings: 00:12:15.758 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:15.758 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72065 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72065 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:16.019 Setting action_on_timeout is changed as expected. 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
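The round-trip being verified here is compact enough to restate as a sketch (same RPCs and snapshot files as traced above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_72065    # defaults: action_on_timeout=none, both timeouts 0
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_72065   # snapshot to diff against the defaults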
00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72065 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72065 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:16.019 Setting timeout_us is changed as expected. 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72065 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:16.019 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72065 00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:16.278 Setting timeout_admin_us is changed as expected. 00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
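Each per-setting check above runs the same grep/awk/sed pipeline against the two snapshots; consolidated into a single helper (a sketch; verify_setting is not a name from the script itself):

    verify_setting() {   # usage: verify_setting <key> <expected modified value>
        local key=$1 expected=$2 got
        got=$(grep "$key" /tmp/settings_modified_72065 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $got == "$expected" ]] && echo "Setting $key is changed as expected."
    }
    verify_setting timeout_us 12000000
    verify_setting timeout_admin_us 24000000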
00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72065 /tmp/settings_modified_72065 00:12:16.278 21:11:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72089 00:12:16.278 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 72089 ']' 00:12:16.278 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 72089 00:12:16.278 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:12:16.278 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.278 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72089 00:12:16.279 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:16.279 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:16.279 killing process with pid 72089 00:12:16.279 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72089' 00:12:16.279 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 72089 00:12:16.279 21:11:27 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 72089 00:12:18.187 RPC TIMEOUT SETTING TEST PASSED. 00:12:18.187 21:11:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:18.187 00:12:18.187 real 0m4.286s 00:12:18.187 user 0m8.185s 00:12:18.187 sys 0m0.590s 00:12:18.187 21:11:29 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.187 ************************************ 00:12:18.187 END TEST nvme_rpc_timeouts 00:12:18.187 ************************************ 00:12:18.187 21:11:29 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:18.187 21:11:29 -- common/autotest_common.sh@1142 -- # return 0 00:12:18.187 21:11:29 -- spdk/autotest.sh@243 -- # uname -s 00:12:18.187 21:11:29 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:12:18.187 21:11:29 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:18.187 21:11:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:18.187 21:11:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.187 21:11:29 -- common/autotest_common.sh@10 -- # set +x 00:12:18.187 ************************************ 00:12:18.187 START TEST sw_hotplug 00:12:18.187 ************************************ 00:12:18.187 21:11:29 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:18.187 * Looking for test storage... 
00:12:18.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:18.187 21:11:29 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:18.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:18.756 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.756 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.756 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.756 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.756 21:11:30 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:18.756 21:11:30 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:18.756 21:11:30 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:12:18.756 21:11:30 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@230 -- # local class 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:18.756 21:11:30 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:12:18.757 21:11:30 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:12:18.757 21:11:30 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:18.757 21:11:30 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:18.757 21:11:30 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:18.757 21:11:30 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:19.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:19.325 Waiting for block devices as requested 00:12:19.325 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.585 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.585 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.585 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:24.859 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:24.859 21:11:36 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:24.859 21:11:36 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:25.117 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:25.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:25.117 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:25.687 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:25.687 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.687 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:25.946 21:11:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72963 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:25.946 21:11:37 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:12:25.946 21:11:37 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:12:25.946 21:11:37 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:12:25.946 21:11:37 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:12:25.946 21:11:37 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:25.946 21:11:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:26.205 Initializing NVMe Controllers 00:12:26.205 Attaching to 0000:00:10.0 00:12:26.205 Attaching to 0000:00:11.0 00:12:26.205 Attached to 0000:00:10.0 00:12:26.205 Attached to 0000:00:11.0 00:12:26.205 Initialization complete. Starting I/O... 
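The bare echo traces that follow are xtrace output, which hides redirections; the helper is driving the kernel's standard PCI hotplug files in sysfs while the hotplug binary keeps I/O running. A sketch of the underlying interface for one device (the rescan write also appears verbatim in the tgt_run_hotplug cleanup trap further down; the driver_override detail is an assumption about where the uio_pci_generic echoes land):

    # enumerate NVMe functions: PCI class 01, subclass 08, progif 02, as traced above
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise hot-remove the function
    echo 1 > /sys/bus/pci/rescan                  # rediscover it on the bus
    # the 'echo uio_pci_generic' traces presumably target the device's
    # driver_override attribute so the rescanned device rebinds to uio_pci_generic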
00:12:26.205 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:26.205 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:26.205 00:12:27.141 QEMU NVMe Ctrl (12340 ): 1359 I/Os completed (+1359) 00:12:27.141 QEMU NVMe Ctrl (12341 ): 1343 I/Os completed (+1343) 00:12:27.141 00:12:28.539 QEMU NVMe Ctrl (12340 ): 3114 I/Os completed (+1755) 00:12:28.539 QEMU NVMe Ctrl (12341 ): 3065 I/Os completed (+1722) 00:12:28.539 00:12:29.105 QEMU NVMe Ctrl (12340 ): 5077 I/Os completed (+1963) 00:12:29.105 QEMU NVMe Ctrl (12341 ): 5003 I/Os completed (+1938) 00:12:29.105 00:12:30.480 QEMU NVMe Ctrl (12340 ): 6916 I/Os completed (+1839) 00:12:30.480 QEMU NVMe Ctrl (12341 ): 6880 I/Os completed (+1877) 00:12:30.480 00:12:31.413 QEMU NVMe Ctrl (12340 ): 8708 I/Os completed (+1792) 00:12:31.413 QEMU NVMe Ctrl (12341 ): 8767 I/Os completed (+1887) 00:12:31.413 00:12:31.979 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:31.979 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:31.979 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:31.979 [2024-07-14 21:11:43.413358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:31.979 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:31.979 [2024-07-14 21:11:43.415240] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.415303] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.415332] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.415357] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:31.979 [2024-07-14 21:11:43.418279] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.418347] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.418371] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.418392] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:31.979 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:31.979 [2024-07-14 21:11:43.447452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:31.979 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:31.979 [2024-07-14 21:11:43.449204] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.449264] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.449296] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.449319] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:31.979 [2024-07-14 21:11:43.451807] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.451853] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.451879] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 [2024-07-14 21:11:43.451898] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.979 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:31.979 EAL: Scan for (pci) bus failed. 00:12:31.979 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:31.979 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:32.235 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:32.235 Attaching to 0000:00:10.0 00:12:32.235 Attached to 0000:00:10.0 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:32.235 21:11:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:32.235 Attaching to 0000:00:11.0 00:12:32.235 Attached to 0000:00:11.0 00:12:33.165 QEMU NVMe Ctrl (12340 ): 1888 I/Os completed (+1888) 00:12:33.165 QEMU NVMe Ctrl (12341 ): 1776 I/Os completed (+1776) 00:12:33.165 00:12:34.538 QEMU NVMe Ctrl (12340 ): 3849 I/Os completed (+1961) 00:12:34.538 QEMU NVMe Ctrl (12341 ): 3744 I/Os completed (+1968) 00:12:34.538 00:12:35.472 QEMU NVMe Ctrl (12340 ): 5815 I/Os completed (+1966) 00:12:35.472 QEMU NVMe Ctrl (12341 ): 5739 I/Os completed (+1995) 00:12:35.472 00:12:36.407 QEMU NVMe Ctrl (12340 ): 7575 I/Os completed (+1760) 00:12:36.407 QEMU NVMe Ctrl (12341 ): 7594 I/Os completed (+1855) 00:12:36.407 00:12:37.340 QEMU NVMe Ctrl (12340 ): 9533 I/Os completed (+1958) 00:12:37.340 QEMU NVMe Ctrl (12341 ): 9600 I/Os completed (+2006) 00:12:37.340 00:12:38.272 QEMU NVMe Ctrl (12340 ): 11324 I/Os completed (+1791) 00:12:38.272 QEMU NVMe Ctrl (12341 ): 11489 I/Os completed (+1889) 00:12:38.272 00:12:39.202 QEMU NVMe Ctrl (12340 ): 13144 I/Os completed (+1820) 00:12:39.202 QEMU NVMe Ctrl (12341 ): 13357 I/Os completed (+1868) 
00:12:39.202 00:12:40.135 QEMU NVMe Ctrl (12340 ): 14748 I/Os completed (+1604) 00:12:40.135 QEMU NVMe Ctrl (12341 ): 15052 I/Os completed (+1695) 00:12:40.135 00:12:41.511 QEMU NVMe Ctrl (12340 ): 16568 I/Os completed (+1820) 00:12:41.511 QEMU NVMe Ctrl (12341 ): 16929 I/Os completed (+1877) 00:12:41.511 00:12:42.446 QEMU NVMe Ctrl (12340 ): 18333 I/Os completed (+1765) 00:12:42.446 QEMU NVMe Ctrl (12341 ): 18709 I/Os completed (+1780) 00:12:42.446 00:12:43.381 QEMU NVMe Ctrl (12340 ): 20170 I/Os completed (+1837) 00:12:43.381 QEMU NVMe Ctrl (12341 ): 20557 I/Os completed (+1848) 00:12:43.381 00:12:44.342 QEMU NVMe Ctrl (12340 ): 21813 I/Os completed (+1643) 00:12:44.343 QEMU NVMe Ctrl (12341 ): 22292 I/Os completed (+1735) 00:12:44.343 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:44.343 [2024-07-14 21:11:55.740117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:44.343 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:44.343 [2024-07-14 21:11:55.742035] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.742105] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.742134] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.742160] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:44.343 [2024-07-14 21:11:55.744952] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.745012] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.745038] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.745060] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:44.343 [2024-07-14 21:11:55.769823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:44.343 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:44.343 [2024-07-14 21:11:55.771515] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.771569] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.771601] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.771624] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:44.343 [2024-07-14 21:11:55.774127] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.774175] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.774201] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 [2024-07-14 21:11:55.774223] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:44.343 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:44.615 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:44.615 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:44.615 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:44.615 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:44.615 21:11:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:44.615 Attaching to 0000:00:10.0 00:12:44.615 Attached to 0000:00:10.0 00:12:44.615 21:11:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:44.615 21:11:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:44.615 21:11:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:44.615 Attaching to 0000:00:11.0 00:12:44.615 Attached to 0000:00:11.0 00:12:45.181 QEMU NVMe Ctrl (12340 ): 1189 I/Os completed (+1189) 00:12:45.181 QEMU NVMe Ctrl (12341 ): 1054 I/Os completed (+1054) 00:12:45.181 00:12:46.115 QEMU NVMe Ctrl (12340 ): 2785 I/Os completed (+1596) 00:12:46.115 QEMU NVMe Ctrl (12341 ): 2800 I/Os completed (+1746) 00:12:46.115 00:12:47.491 QEMU NVMe Ctrl (12340 ): 4549 I/Os completed (+1764) 00:12:47.491 QEMU NVMe Ctrl (12341 ): 4643 I/Os completed (+1843) 00:12:47.491 00:12:48.426 QEMU NVMe Ctrl (12340 ): 6382 I/Os completed (+1833) 00:12:48.426 QEMU NVMe Ctrl (12341 ): 6546 I/Os completed (+1903) 00:12:48.426 00:12:49.361 QEMU NVMe Ctrl (12340 ): 8258 I/Os completed (+1876) 00:12:49.361 QEMU NVMe Ctrl (12341 ): 8450 I/Os completed (+1904) 00:12:49.361 00:12:50.297 QEMU NVMe Ctrl (12340 ): 10170 I/Os completed (+1912) 00:12:50.297 QEMU NVMe Ctrl (12341 ): 10392 I/Os completed (+1942) 00:12:50.297 00:12:51.230 QEMU NVMe Ctrl (12340 ): 11940 I/Os completed (+1770) 00:12:51.230 QEMU NVMe Ctrl (12341 ): 12321 I/Os completed (+1929) 00:12:51.230 00:12:52.165 QEMU NVMe Ctrl (12340 ): 13828 I/Os completed (+1888) 00:12:52.165 QEMU NVMe Ctrl (12341 ): 14247 I/Os completed (+1926) 00:12:52.165 00:12:53.542 
QEMU NVMe Ctrl (12340 ): 15507 I/Os completed (+1679) 00:12:53.542 QEMU NVMe Ctrl (12341 ): 16086 I/Os completed (+1839) 00:12:53.542 00:12:54.110 QEMU NVMe Ctrl (12340 ): 17283 I/Os completed (+1776) 00:12:54.110 QEMU NVMe Ctrl (12341 ): 17954 I/Os completed (+1868) 00:12:54.110 00:12:55.490 QEMU NVMe Ctrl (12340 ): 19195 I/Os completed (+1912) 00:12:55.490 QEMU NVMe Ctrl (12341 ): 19899 I/Os completed (+1945) 00:12:55.490 00:12:56.452 QEMU NVMe Ctrl (12340 ): 21032 I/Os completed (+1837) 00:12:56.452 QEMU NVMe Ctrl (12341 ): 21798 I/Os completed (+1899) 00:12:56.452 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:56.711 [2024-07-14 21:12:08.070005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:56.711 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:56.711 [2024-07-14 21:12:08.072259] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.072335] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.072390] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.072423] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:56.711 [2024-07-14 21:12:08.075976] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.076043] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.076072] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.076097] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:56.711 [2024-07-14 21:12:08.096203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:56.711 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:56.711 [2024-07-14 21:12:08.098259] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.098340] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.098377] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.098405] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:56.711 [2024-07-14 21:12:08.101379] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.101455] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.101489] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 [2024-07-14 21:12:08.101513] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:56.711 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:56.711 EAL: Scan for (pci) bus failed. 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:56.711 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:56.970 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:56.970 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.970 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:56.970 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:56.970 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:56.970 Attaching to 0000:00:10.0 00:12:56.970 Attached to 0000:00:10.0 00:12:56.970 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:56.970 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.970 21:12:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:56.970 Attaching to 0000:00:11.0 00:12:56.970 Attached to 0000:00:11.0 00:12:56.970 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:56.970 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:56.970 [2024-07-14 21:12:08.370718] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:09.177 21:12:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:09.177 21:12:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:09.177 21:12:20 sw_hotplug -- common/autotest_common.sh@715 -- # time=42.95 00:13:09.177 21:12:20 sw_hotplug -- common/autotest_common.sh@716 -- # echo 42.95 00:13:09.177 21:12:20 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:09.177 21:12:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.95 00:13:09.177 21:12:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.95 2 00:13:09.177 remove_attach_helper took 42.95s to complete (handling 2 nvme drive(s)) 21:12:20 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72963 00:13:15.738 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72963) - No such process 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72963 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=73522 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:15.738 21:12:26 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 73522 00:13:15.738 21:12:26 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 73522 ']' 00:13:15.738 21:12:26 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.738 21:12:26 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.738 21:12:26 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.738 21:12:26 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.738 21:12:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.738 [2024-07-14 21:12:26.489655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:15.738 [2024-07-14 21:12:26.489856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73522 ] 00:13:15.738 [2024-07-14 21:12:26.663505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.738 [2024-07-14 21:12:26.866526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:16.032 21:12:27 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:16.032 21:12:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.596 21:12:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.596 21:12:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.596 21:12:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.596 [2024-07-14 21:12:33.600487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:22.596 [2024-07-14 21:12:33.603091] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.596 [2024-07-14 21:12:33.603160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.596 [2024-07-14 21:12:33.603201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.596 [2024-07-14 21:12:33.603230] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.596 [2024-07-14 21:12:33.603251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.596 [2024-07-14 21:12:33.603267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.596 [2024-07-14 21:12:33.603285] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.596 [2024-07-14 21:12:33.603300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.596 [2024-07-14 21:12:33.603316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.596 [2024-07-14 21:12:33.603346] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.596 [2024-07-14 21:12:33.603363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.596 [2024-07-14 21:12:33.603394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:22.596 21:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:22.596 [2024-07-14 21:12:34.000475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:22.596 [2024-07-14 21:12:34.003244] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.596 [2024-07-14 21:12:34.003325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.596 [2024-07-14 21:12:34.003346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.596 [2024-07-14 21:12:34.003372] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.596 [2024-07-14 21:12:34.003404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.596 [2024-07-14 21:12:34.003421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.596 [2024-07-14 21:12:34.003437] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.596 [2024-07-14 21:12:34.003453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.596 [2024-07-14 21:12:34.003483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.596 [2024-07-14 21:12:34.003500] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.596 [2024-07-14 21:12:34.003515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.596 [2024-07-14 21:12:34.003531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.596 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:22.596 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.596 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.596 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.596 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.596 21:12:34 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.596 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.596 21:12:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.855 21:12:34 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.855 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:23.113 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:23.113 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:23.113 21:12:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.321 21:12:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.321 21:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.321 21:12:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.321 [2024-07-14 21:12:46.601217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.321 21:12:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.321 21:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.321 [2024-07-14 21:12:46.603946] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.321 [2024-07-14 21:12:46.603991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.321 [2024-07-14 21:12:46.604017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.321 [2024-07-14 21:12:46.604045] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.321 [2024-07-14 21:12:46.604063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.321 [2024-07-14 21:12:46.604079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.321 [2024-07-14 21:12:46.604097] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.321 [2024-07-14 21:12:46.604117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.321 [2024-07-14 21:12:46.604136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.321 [2024-07-14 21:12:46.604152] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.321 [2024-07-14 21:12:46.604170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.321 [2024-07-14 21:12:46.604184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.321 21:12:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:35.321 21:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:35.579 [2024-07-14 21:12:47.001221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:35.579 [2024-07-14 21:12:47.003799] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.579 [2024-07-14 21:12:47.003892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.579 [2024-07-14 21:12:47.003914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.579 [2024-07-14 21:12:47.003943] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.579 [2024-07-14 21:12:47.003958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.579 [2024-07-14 21:12:47.003974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.579 [2024-07-14 21:12:47.003988] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.579 [2024-07-14 21:12:47.004004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.579 [2024-07-14 21:12:47.004017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.579 [2024-07-14 21:12:47.004065] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.579 [2024-07-14 21:12:47.004079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.579 [2024-07-14 21:12:47.004110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.837 21:12:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.837 21:12:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.837 21:12:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.837 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:36.095 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:36.095 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:36.095 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:36.095 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:36.095 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:36.096 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:36.096 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:36.096 21:12:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.299 21:12:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.299 21:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.299 21:12:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.299 [2024-07-14 21:12:59.601430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:48.299 [2024-07-14 21:12:59.604498] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.299 [2024-07-14 21:12:59.604548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.299 [2024-07-14 21:12:59.604573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.299 [2024-07-14 21:12:59.604601] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.299 [2024-07-14 21:12:59.604620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.299 [2024-07-14 21:12:59.604635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.299 [2024-07-14 21:12:59.604656] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.299 [2024-07-14 21:12:59.604672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.299 [2024-07-14 21:12:59.604688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.299 [2024-07-14 21:12:59.604703] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.299 [2024-07-14 21:12:59.604720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.299 [2024-07-14 21:12:59.604737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.299 21:12:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.299 21:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.299 21:12:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:48.299 21:12:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:48.558 [2024-07-14 21:13:00.001453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:48.558 [2024-07-14 21:13:00.004253] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.558 [2024-07-14 21:13:00.004311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.558 [2024-07-14 21:13:00.004333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.558 [2024-07-14 21:13:00.004372] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.558 [2024-07-14 21:13:00.004390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.558 [2024-07-14 21:13:00.004407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.558 [2024-07-14 21:13:00.004424] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.558 [2024-07-14 21:13:00.004441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.558 [2024-07-14 21:13:00.004455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.558 [2024-07-14 21:13:00.004478] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.558 [2024-07-14 21:13:00.004493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.558 [2024-07-14 21:13:00.004510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:13:48.817 21:13:00 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.817 21:13:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.817 21:13:00 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:48.817 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:49.076 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:49.076 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.076 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.076 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.076 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:49.076 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:49.076 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.076 21:13:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.08 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.08 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.08 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.08 2 00:14:01.355 remove_attach_helper took 45.08s to complete (handling 2 nvme drive(s)) 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:14:01.355 21:13:12 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:01.355 21:13:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.910 21:13:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.910 21:13:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.910 21:13:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.910 [2024-07-14 21:13:18.711764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:07.910 [2024-07-14 21:13:18.713913] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.910 [2024-07-14 21:13:18.713997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.910 [2024-07-14 21:13:18.714029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.910 [2024-07-14 21:13:18.714082] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.910 [2024-07-14 21:13:18.714109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.910 [2024-07-14 21:13:18.714125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.910 [2024-07-14 21:13:18.714143] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.910 [2024-07-14 21:13:18.714173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.910 [2024-07-14 21:13:18.714188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.910 [2024-07-14 21:13:18.714202] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.910 [2024-07-14 21:13:18.714217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.910 [2024-07-14 21:13:18.714245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:07.910 21:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:07.910 [2024-07-14 21:13:19.111717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:07.910 [2024-07-14 21:13:19.114148] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.910 [2024-07-14 21:13:19.114214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.910 [2024-07-14 21:13:19.114235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.910 [2024-07-14 21:13:19.114277] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.910 [2024-07-14 21:13:19.114308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.910 [2024-07-14 21:13:19.114325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.910 [2024-07-14 21:13:19.114340] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.910 [2024-07-14 21:13:19.114356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.910 [2024-07-14 21:13:19.114370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.910 [2024-07-14 21:13:19.114387] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.910 [2024-07-14 21:13:19.114401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.910 [2024-07-14 21:13:19.114419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.910 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.911 21:13:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.911 21:13:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.911 21:13:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:07.911 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:08.170 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:08.170 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:08.170 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:08.170 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:08.170 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:08.170 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:08.170 21:13:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:20.380 21:13:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.380 21:13:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:20.380 21:13:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:20.380 [2024-07-14 21:13:31.611843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:20.380 [2024-07-14 21:13:31.613996] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.380 [2024-07-14 21:13:31.614039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.380 [2024-07-14 21:13:31.614070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.380 [2024-07-14 21:13:31.614099] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.380 [2024-07-14 21:13:31.614118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.380 [2024-07-14 21:13:31.614133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.380 [2024-07-14 21:13:31.614150] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.380 [2024-07-14 21:13:31.614165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.380 [2024-07-14 21:13:31.614181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.380 [2024-07-14 21:13:31.614196] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.380 [2024-07-14 21:13:31.614212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.380 [2024-07-14 21:13:31.614227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:20.380 21:13:31 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:20.380 21:13:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.380 21:13:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:20.380 21:13:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:20.380 21:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:20.639 [2024-07-14 21:13:32.111857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:14:20.639 [2024-07-14 21:13:32.113552] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.639 [2024-07-14 21:13:32.113598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.639 [2024-07-14 21:13:32.113618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.639 [2024-07-14 21:13:32.113645] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.639 [2024-07-14 21:13:32.113660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.639 [2024-07-14 21:13:32.113677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.639 [2024-07-14 21:13:32.113691] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.639 [2024-07-14 21:13:32.113705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.639 [2024-07-14 21:13:32.113718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.639 [2024-07-14 21:13:32.113733] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.639 [2024-07-14 21:13:32.113745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.639 [2024-07-14 21:13:32.113759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:14:20.898 21:13:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.898 21:13:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:20.898 21:13:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:20.898 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:21.157 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:21.157 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:21.157 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:21.157 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:21.157 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:21.157 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:21.157 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:21.157 21:13:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:33.473 21:13:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.473 21:13:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:33.473 21:13:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:33.473 21:13:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.473 21:13:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:33.473 [2024-07-14 21:13:44.712042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:33.473 [2024-07-14 21:13:44.713974] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.473 [2024-07-14 21:13:44.714019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.473 [2024-07-14 21:13:44.714042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.473 [2024-07-14 21:13:44.714083] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.473 [2024-07-14 21:13:44.714100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.473 [2024-07-14 21:13:44.714113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.473 [2024-07-14 21:13:44.714145] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.473 [2024-07-14 21:13:44.714174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.473 [2024-07-14 21:13:44.714209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.473 [2024-07-14 21:13:44.714225] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.473 [2024-07-14 21:13:44.714240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.473 [2024-07-14 21:13:44.714254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.473 21:13:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:33.473 21:13:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:33.732 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:33.732 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:33.732 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:33.732 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:33.732 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:33.732 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:33.732 21:13:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.732 21:13:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:33.992 21:13:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.992 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:33.992 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:33.992 [2024-07-14 21:13:45.412064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:33.992 [2024-07-14 21:13:45.414474] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.992 [2024-07-14 21:13:45.414536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.992 [2024-07-14 21:13:45.414556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.992 [2024-07-14 21:13:45.414583] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.992 [2024-07-14 21:13:45.414597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.992 [2024-07-14 21:13:45.414612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.992 [2024-07-14 21:13:45.414626] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.992 [2024-07-14 21:13:45.414640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.992 [2024-07-14 21:13:45.414652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.992 [2024-07-14 21:13:45.414667] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.992 [2024-07-14 21:13:45.414680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.992 [2024-07-14 21:13:45.414696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:34.559 21:13:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.559 21:13:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:34.559 21:13:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:34.559 21:13:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:34.559 21:13:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:34.559 21:13:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:34.559 21:13:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:34.559 21:13:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:34.559 21:13:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:34.817 21:13:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:34.817 21:13:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:34.817 21:13:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:47.025 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:47.025 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:47.025 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:47.025 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:47.025 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:47.025 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:47.025 21:13:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.025 21:13:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:47.025 21:13:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.025 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:47.025 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:47.025 21:13:58 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.61 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.61 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:47.026 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.61 00:14:47.026 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.61 2 00:14:47.026 remove_attach_helper took 45.61s to complete (handling 2 nvme drive(s)) 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:47.026 21:13:58 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 73522 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 73522 ']' 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 73522 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73522 00:14:47.026 killing process with pid 73522 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73522' 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@967 -- # kill 73522 00:14:47.026 21:13:58 sw_hotplug -- common/autotest_common.sh@972 -- # wait 73522 00:14:48.929 21:14:00 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:49.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:49.447 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:49.447 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:49.706 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:49.706 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:49.706 00:14:49.706 real 2m31.583s 00:14:49.706 user 1m52.114s 00:14:49.706 sys 0m19.124s 00:14:49.706 21:14:01 sw_hotplug -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.706 21:14:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:49.706 ************************************ 00:14:49.706 END TEST sw_hotplug 00:14:49.706 ************************************ 00:14:49.706 21:14:01 -- common/autotest_common.sh@1142 -- # return 0 00:14:49.706 21:14:01 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:14:49.707 21:14:01 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:49.707 21:14:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:49.707 21:14:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.707 21:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:49.707 ************************************ 00:14:49.707 START TEST nvme_xnvme 00:14:49.707 ************************************ 00:14:49.707 21:14:01 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:49.966 * Looking for test storage... 00:14:49.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:49.966 21:14:01 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.966 21:14:01 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.966 21:14:01 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.966 21:14:01 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.966 21:14:01 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.966 21:14:01 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.966 21:14:01 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.966 21:14:01 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:49.966 21:14:01 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.966 21:14:01 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 
00:14:49.966 21:14:01 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:49.966 21:14:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.966 21:14:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.966 ************************************ 00:14:49.966 START TEST xnvme_to_malloc_dd_copy 00:14:49.966 ************************************ 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:49.966 21:14:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:49.966 { 00:14:49.966 "subsystems": [ 00:14:49.966 { 00:14:49.966 "subsystem": "bdev", 00:14:49.966 "config": [ 00:14:49.966 { 00:14:49.966 "params": { 00:14:49.966 "block_size": 512, 00:14:49.966 "num_blocks": 2097152, 00:14:49.966 "name": "malloc0" 00:14:49.966 }, 00:14:49.966 "method": 
"bdev_malloc_create" 00:14:49.966 }, 00:14:49.966 { 00:14:49.966 "params": { 00:14:49.966 "io_mechanism": "libaio", 00:14:49.966 "filename": "/dev/nullb0", 00:14:49.966 "name": "null0" 00:14:49.966 }, 00:14:49.966 "method": "bdev_xnvme_create" 00:14:49.966 }, 00:14:49.966 { 00:14:49.966 "method": "bdev_wait_for_examine" 00:14:49.966 } 00:14:49.966 ] 00:14:49.966 } 00:14:49.966 ] 00:14:49.966 } 00:14:49.966 [2024-07-14 21:14:01.439656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:49.966 [2024-07-14 21:14:01.440052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74871 ] 00:14:50.225 [2024-07-14 21:14:01.615996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.490 [2024-07-14 21:14:01.844320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.161  Copying: 191/1024 [MB] (191 MBps) Copying: 383/1024 [MB] (191 MBps) Copying: 576/1024 [MB] (193 MBps) Copying: 768/1024 [MB] (192 MBps) Copying: 960/1024 [MB] (192 MBps) Copying: 1024/1024 [MB] (average 192 MBps) 00:15:00.161 00:15:00.161 21:14:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:00.161 21:14:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:00.161 21:14:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:00.161 21:14:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:00.161 { 00:15:00.161 "subsystems": [ 00:15:00.161 { 00:15:00.161 "subsystem": "bdev", 00:15:00.161 "config": [ 00:15:00.161 { 00:15:00.161 "params": { 00:15:00.161 "block_size": 512, 00:15:00.161 "num_blocks": 2097152, 00:15:00.161 "name": "malloc0" 00:15:00.161 }, 00:15:00.161 "method": "bdev_malloc_create" 00:15:00.161 }, 00:15:00.161 { 00:15:00.161 "params": { 00:15:00.161 "io_mechanism": "libaio", 00:15:00.161 "filename": "/dev/nullb0", 00:15:00.161 "name": "null0" 00:15:00.161 }, 00:15:00.161 "method": "bdev_xnvme_create" 00:15:00.161 }, 00:15:00.161 { 00:15:00.161 "method": "bdev_wait_for_examine" 00:15:00.161 } 00:15:00.162 ] 00:15:00.162 } 00:15:00.162 ] 00:15:00.162 } 00:15:00.162 [2024-07-14 21:14:11.541264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:00.162 [2024-07-14 21:14:11.541420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74986 ] 00:15:00.420 [2024-07-14 21:14:11.714889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.420 [2024-07-14 21:14:11.887070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.405  Copying: 187/1024 [MB] (187 MBps) Copying: 374/1024 [MB] (186 MBps) Copying: 561/1024 [MB] (186 MBps) Copying: 747/1024 [MB] (186 MBps) Copying: 936/1024 [MB] (189 MBps) Copying: 1024/1024 [MB] (average 187 MBps) 00:15:10.405 00:15:10.405 21:14:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:10.405 21:14:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:10.405 21:14:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:10.405 21:14:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:10.405 21:14:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:10.405 21:14:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:10.405 { 00:15:10.405 "subsystems": [ 00:15:10.405 { 00:15:10.405 "subsystem": "bdev", 00:15:10.405 "config": [ 00:15:10.405 { 00:15:10.405 "params": { 00:15:10.405 "block_size": 512, 00:15:10.405 "num_blocks": 2097152, 00:15:10.405 "name": "malloc0" 00:15:10.405 }, 00:15:10.405 "method": "bdev_malloc_create" 00:15:10.405 }, 00:15:10.405 { 00:15:10.405 "params": { 00:15:10.405 "io_mechanism": "io_uring", 00:15:10.405 "filename": "/dev/nullb0", 00:15:10.405 "name": "null0" 00:15:10.405 }, 00:15:10.405 "method": "bdev_xnvme_create" 00:15:10.405 }, 00:15:10.405 { 00:15:10.405 "method": "bdev_wait_for_examine" 00:15:10.405 } 00:15:10.405 ] 00:15:10.405 } 00:15:10.405 ] 00:15:10.405 } 00:15:10.405 [2024-07-14 21:14:21.704580] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:10.405 [2024-07-14 21:14:21.704750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75101 ] 00:15:10.405 [2024-07-14 21:14:21.871528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.664 [2024-07-14 21:14:22.041876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.301  Copying: 201/1024 [MB] (201 MBps) Copying: 404/1024 [MB] (203 MBps) Copying: 608/1024 [MB] (203 MBps) Copying: 814/1024 [MB] (206 MBps) Copying: 1016/1024 [MB] (202 MBps) Copying: 1024/1024 [MB] (average 203 MBps) 00:15:20.301 00:15:20.301 21:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:20.301 21:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:20.301 21:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:20.301 21:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:20.301 { 00:15:20.301 "subsystems": [ 00:15:20.301 { 00:15:20.301 "subsystem": "bdev", 00:15:20.301 "config": [ 00:15:20.301 { 00:15:20.301 "params": { 00:15:20.302 "block_size": 512, 00:15:20.302 "num_blocks": 2097152, 00:15:20.302 "name": "malloc0" 00:15:20.302 }, 00:15:20.302 "method": "bdev_malloc_create" 00:15:20.302 }, 00:15:20.302 { 00:15:20.302 "params": { 00:15:20.302 "io_mechanism": "io_uring", 00:15:20.302 "filename": "/dev/nullb0", 00:15:20.302 "name": "null0" 00:15:20.302 }, 00:15:20.302 "method": "bdev_xnvme_create" 00:15:20.302 }, 00:15:20.302 { 00:15:20.302 "method": "bdev_wait_for_examine" 00:15:20.302 } 00:15:20.302 ] 00:15:20.302 } 00:15:20.302 ] 00:15:20.302 } 00:15:20.302 [2024-07-14 21:14:31.366015] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:20.302 [2024-07-14 21:14:31.366139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75205 ] 00:15:20.302 [2024-07-14 21:14:31.526919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.302 [2024-07-14 21:14:31.692965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.511  Copying: 205/1024 [MB] (205 MBps) Copying: 410/1024 [MB] (205 MBps) Copying: 617/1024 [MB] (206 MBps) Copying: 821/1024 [MB] (204 MBps) Copying: 1024/1024 [MB] (average 205 MBps) 00:15:29.511 00:15:29.511 21:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:15:29.511 21:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:15:29.511 ************************************ 00:15:29.511 END TEST xnvme_to_malloc_dd_copy 00:15:29.511 ************************************ 00:15:29.511 00:15:29.511 real 0m39.604s 00:15:29.511 user 0m34.627s 00:15:29.511 sys 0m4.433s 00:15:29.511 21:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.511 21:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:29.511 21:14:40 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:29.511 21:14:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:29.511 21:14:40 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:29.511 21:14:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.511 21:14:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.511 ************************************ 00:15:29.511 START TEST xnvme_bdevperf 00:15:29.511 ************************************ 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 
00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:29.511 21:14:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:29.511 { 00:15:29.511 "subsystems": [ 00:15:29.511 { 00:15:29.511 "subsystem": "bdev", 00:15:29.511 "config": [ 00:15:29.511 { 00:15:29.511 "params": { 00:15:29.511 "io_mechanism": "libaio", 00:15:29.511 "filename": "/dev/nullb0", 00:15:29.511 "name": "null0" 00:15:29.511 }, 00:15:29.511 "method": "bdev_xnvme_create" 00:15:29.511 }, 00:15:29.511 { 00:15:29.511 "method": "bdev_wait_for_examine" 00:15:29.511 } 00:15:29.511 ] 00:15:29.511 } 00:15:29.511 ] 00:15:29.511 } 00:15:29.770 [2024-07-14 21:14:41.095406] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:29.770 [2024-07-14 21:14:41.095595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75343 ] 00:15:29.770 [2024-07-14 21:14:41.271282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.029 [2024-07-14 21:14:41.497500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.288 Running I/O for 5 seconds... 00:15:35.557 00:15:35.557 Latency(us) 00:15:35.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.557 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:35.557 null0 : 5.00 123280.26 481.56 0.00 0.00 516.08 148.01 822.92 00:15:35.557 =================================================================================================================== 00:15:35.557 Total : 123280.26 481.56 0.00 0.00 516.08 148.01 822.92 00:15:36.494 21:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:36.494 21:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:36.494 21:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:36.494 21:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:36.494 21:14:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:36.494 21:14:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:36.494 { 00:15:36.494 "subsystems": [ 00:15:36.494 { 00:15:36.494 "subsystem": "bdev", 00:15:36.494 "config": [ 00:15:36.494 { 00:15:36.494 "params": { 00:15:36.494 "io_mechanism": "io_uring", 00:15:36.494 "filename": "/dev/nullb0", 00:15:36.494 "name": "null0" 00:15:36.494 }, 00:15:36.494 "method": "bdev_xnvme_create" 00:15:36.494 }, 00:15:36.494 { 00:15:36.494 "method": "bdev_wait_for_examine" 00:15:36.494 } 00:15:36.494 ] 00:15:36.494 } 00:15:36.494 ] 00:15:36.494 } 00:15:36.494 [2024-07-14 21:14:47.887776] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:36.494 [2024-07-14 21:14:47.887948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75417 ] 00:15:36.753 [2024-07-14 21:14:48.045657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.753 [2024-07-14 21:14:48.220395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.012 Running I/O for 5 seconds... 00:15:42.303 00:15:42.303 Latency(us) 00:15:42.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.303 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:42.303 null0 : 5.00 169194.24 660.92 0.00 0.00 375.29 200.15 666.53 00:15:42.303 =================================================================================================================== 00:15:42.303 Total : 169194.24 660.92 0.00 0.00 375.29 200.15 666.53 00:15:43.239 21:14:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:15:43.239 21:14:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:15:43.239 ************************************ 00:15:43.239 END TEST xnvme_bdevperf 00:15:43.239 ************************************ 00:15:43.239 00:15:43.239 real 0m13.554s 00:15:43.239 user 0m10.487s 00:15:43.239 sys 0m2.856s 00:15:43.239 21:14:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.239 21:14:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:43.239 21:14:54 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:43.239 ************************************ 00:15:43.239 END TEST nvme_xnvme 00:15:43.239 ************************************ 00:15:43.239 00:15:43.239 real 0m53.349s 00:15:43.239 user 0m45.175s 00:15:43.239 sys 0m7.406s 00:15:43.239 21:14:54 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.239 21:14:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.239 21:14:54 -- common/autotest_common.sh@1142 -- # return 0 00:15:43.239 21:14:54 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:43.239 21:14:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:43.239 21:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.239 21:14:54 -- common/autotest_common.sh@10 -- # set +x 00:15:43.239 ************************************ 00:15:43.239 START TEST blockdev_xnvme 00:15:43.239 ************************************ 00:15:43.239 21:14:54 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:43.239 * Looking for test storage... 
00:15:43.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75557 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:43.239 21:14:54 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 75557 00:15:43.239 21:14:54 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 75557 ']' 00:15:43.239 21:14:54 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.239 21:14:54 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.239 21:14:54 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.239 21:14:54 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.239 21:14:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.499 [2024-07-14 21:14:54.812595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:43.499 [2024-07-14 21:14:54.812821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75557 ] 00:15:43.499 [2024-07-14 21:14:54.973744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.758 [2024-07-14 21:14:55.149737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.326 21:14:55 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.326 21:14:55 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:15:44.326 21:14:55 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:15:44.326 21:14:55 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:15:44.326 21:14:55 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:44.326 21:14:55 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:44.326 21:14:55 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:44.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:44.843 Waiting for block devices as requested 00:15:44.843 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:45.102 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:45.102 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:45.102 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:50.373 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:50.373 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:50.373 21:15:01 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:50.373 21:15:01 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.374 21:15:01 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:50.374 nvme0n1 00:15:50.374 nvme1n1 00:15:50.374 nvme2n1 00:15:50.374 nvme2n2 00:15:50.374 nvme2n3 00:15:50.374 nvme3n1 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == 
false)' 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.374 21:15:01 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:15:50.374 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d709c0db-d07d-40ba-a8d8-500d6354d51f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d709c0db-d07d-40ba-a8d8-500d6354d51f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a80f9780-1999-4cc3-9eec-233ec495bb19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a80f9780-1999-4cc3-9eec-233ec495bb19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "df06a19f-7134-4bc6-a999-0ed52bb37ce1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df06a19f-7134-4bc6-a999-0ed52bb37ce1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "aa7a01dd-8a3f-4d7f-89fb-5c76ec07b26a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa7a01dd-8a3f-4d7f-89fb-5c76ec07b26a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "d192de0b-3126-4663-ab7c-519b82fcd5c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d192de0b-3126-4663-ab7c-519b82fcd5c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7b7f33e5-b987-4678-b9f0-7e904c319986"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7b7f33e5-b987-4678-b9f0-7e904c319986",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:50.639 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:15:50.639 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:15:50.639 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:15:50.639 21:15:01 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 75557 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 75557 ']' 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 75557 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75557 00:15:50.639 killing process with pid 75557 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:50.639 21:15:01 blockdev_xnvme -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 75557' 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 75557 00:15:50.639 21:15:01 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 75557 00:15:52.539 21:15:03 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:52.539 21:15:03 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:52.539 21:15:03 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:15:52.539 21:15:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.539 21:15:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.539 ************************************ 00:15:52.539 START TEST bdev_hello_world 00:15:52.539 ************************************ 00:15:52.539 21:15:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:52.539 [2024-07-14 21:15:04.048252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:52.539 [2024-07-14 21:15:04.048418] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75922 ] 00:15:52.797 [2024-07-14 21:15:04.204685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.055 [2024-07-14 21:15:04.377394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.313 [2024-07-14 21:15:04.739208] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:53.313 [2024-07-14 21:15:04.739260] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:53.313 [2024-07-14 21:15:04.739300] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:53.313 [2024-07-14 21:15:04.741476] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:53.313 [2024-07-14 21:15:04.741766] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:53.313 [2024-07-14 21:15:04.741789] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:53.313 [2024-07-14 21:15:04.741987] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:53.313 00:15:53.313 [2024-07-14 21:15:04.742016] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:54.251 ************************************ 00:15:54.251 END TEST bdev_hello_world 00:15:54.252 ************************************ 00:15:54.252 00:15:54.252 real 0m1.794s 00:15:54.252 user 0m1.520s 00:15:54.252 sys 0m0.161s 00:15:54.252 21:15:05 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:54.252 21:15:05 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:54.515 21:15:05 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:54.516 21:15:05 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:15:54.516 21:15:05 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:54.516 21:15:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.516 21:15:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.516 ************************************ 00:15:54.516 START TEST bdev_bounds 00:15:54.516 ************************************ 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:15:54.516 Process bdevio pid: 75962 00:15:54.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=75962 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 75962' 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 75962 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 75962 ']' 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.516 21:15:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:54.516 [2024-07-14 21:15:05.913246] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:54.516 [2024-07-14 21:15:05.913409] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75962 ] 00:15:54.775 [2024-07-14 21:15:06.084939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:54.776 [2024-07-14 21:15:06.250484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.776 [2024-07-14 21:15:06.250609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.776 [2024-07-14 21:15:06.250637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.343 21:15:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.343 21:15:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:15:55.343 21:15:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:55.602 I/O targets: 00:15:55.602 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:55.602 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:55.602 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.602 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.602 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.602 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:55.602 00:15:55.602 00:15:55.602 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.602 http://cunit.sourceforge.net/ 00:15:55.602 00:15:55.602 00:15:55.602 Suite: bdevio tests on: nvme3n1 00:15:55.602 Test: blockdev write read block ...passed 00:15:55.602 Test: blockdev write zeroes read block ...passed 00:15:55.602 Test: blockdev write zeroes read no split ...passed 00:15:55.602 Test: blockdev write zeroes read split ...passed 00:15:55.602 Test: blockdev write zeroes read split partial ...passed 00:15:55.602 Test: blockdev reset ...passed 00:15:55.602 Test: blockdev write read 8 blocks ...passed 00:15:55.602 Test: blockdev write read size > 128k ...passed 00:15:55.602 Test: blockdev write read invalid size ...passed 00:15:55.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.602 Test: blockdev write read max offset ...passed 00:15:55.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.602 Test: blockdev writev readv 8 blocks ...passed 00:15:55.602 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.602 Test: blockdev writev readv block ...passed 00:15:55.602 Test: blockdev writev readv size > 128k ...passed 00:15:55.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.602 Test: blockdev comparev and writev ...passed 00:15:55.602 Test: blockdev nvme passthru rw ...passed 00:15:55.602 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.602 Test: blockdev nvme admin passthru ...passed 00:15:55.602 Test: blockdev copy ...passed 00:15:55.602 Suite: bdevio tests on: nvme2n3 00:15:55.602 Test: blockdev write read block ...passed 00:15:55.602 Test: blockdev write zeroes read block ...passed 00:15:55.602 Test: blockdev write zeroes read no split ...passed 00:15:55.602 Test: blockdev write zeroes read split ...passed 00:15:55.602 Test: blockdev write zeroes read split partial ...passed 00:15:55.602 Test: blockdev reset ...passed 
00:15:55.602 Test: blockdev write read 8 blocks ...passed 00:15:55.602 Test: blockdev write read size > 128k ...passed 00:15:55.602 Test: blockdev write read invalid size ...passed 00:15:55.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.602 Test: blockdev write read max offset ...passed 00:15:55.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.602 Test: blockdev writev readv 8 blocks ...passed 00:15:55.602 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.602 Test: blockdev writev readv block ...passed 00:15:55.602 Test: blockdev writev readv size > 128k ...passed 00:15:55.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.602 Test: blockdev comparev and writev ...passed 00:15:55.602 Test: blockdev nvme passthru rw ...passed 00:15:55.602 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.602 Test: blockdev nvme admin passthru ...passed 00:15:55.602 Test: blockdev copy ...passed 00:15:55.602 Suite: bdevio tests on: nvme2n2 00:15:55.602 Test: blockdev write read block ...passed 00:15:55.602 Test: blockdev write zeroes read block ...passed 00:15:55.602 Test: blockdev write zeroes read no split ...passed 00:15:55.602 Test: blockdev write zeroes read split ...passed 00:15:55.602 Test: blockdev write zeroes read split partial ...passed 00:15:55.602 Test: blockdev reset ...passed 00:15:55.602 Test: blockdev write read 8 blocks ...passed 00:15:55.602 Test: blockdev write read size > 128k ...passed 00:15:55.602 Test: blockdev write read invalid size ...passed 00:15:55.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.602 Test: blockdev write read max offset ...passed 00:15:55.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.602 Test: blockdev writev readv 8 blocks ...passed 00:15:55.602 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.602 Test: blockdev writev readv block ...passed 00:15:55.602 Test: blockdev writev readv size > 128k ...passed 00:15:55.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.602 Test: blockdev comparev and writev ...passed 00:15:55.602 Test: blockdev nvme passthru rw ...passed 00:15:55.602 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.602 Test: blockdev nvme admin passthru ...passed 00:15:55.602 Test: blockdev copy ...passed 00:15:55.602 Suite: bdevio tests on: nvme2n1 00:15:55.602 Test: blockdev write read block ...passed 00:15:55.602 Test: blockdev write zeroes read block ...passed 00:15:55.602 Test: blockdev write zeroes read no split ...passed 00:15:55.861 Test: blockdev write zeroes read split ...passed 00:15:55.861 Test: blockdev write zeroes read split partial ...passed 00:15:55.861 Test: blockdev reset ...passed 00:15:55.861 Test: blockdev write read 8 blocks ...passed 00:15:55.861 Test: blockdev write read size > 128k ...passed 00:15:55.861 Test: blockdev write read invalid size ...passed 00:15:55.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.861 Test: blockdev write read max offset ...passed 00:15:55.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.861 Test: blockdev writev readv 8 blocks 
...passed 00:15:55.861 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.861 Test: blockdev writev readv block ...passed 00:15:55.861 Test: blockdev writev readv size > 128k ...passed 00:15:55.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.861 Test: blockdev comparev and writev ...passed 00:15:55.861 Test: blockdev nvme passthru rw ...passed 00:15:55.861 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.861 Test: blockdev nvme admin passthru ...passed 00:15:55.861 Test: blockdev copy ...passed 00:15:55.861 Suite: bdevio tests on: nvme1n1 00:15:55.861 Test: blockdev write read block ...passed 00:15:55.861 Test: blockdev write zeroes read block ...passed 00:15:55.861 Test: blockdev write zeroes read no split ...passed 00:15:55.861 Test: blockdev write zeroes read split ...passed 00:15:55.861 Test: blockdev write zeroes read split partial ...passed 00:15:55.861 Test: blockdev reset ...passed 00:15:55.861 Test: blockdev write read 8 blocks ...passed 00:15:55.861 Test: blockdev write read size > 128k ...passed 00:15:55.861 Test: blockdev write read invalid size ...passed 00:15:55.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.861 Test: blockdev write read max offset ...passed 00:15:55.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.861 Test: blockdev writev readv 8 blocks ...passed 00:15:55.861 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.861 Test: blockdev writev readv block ...passed 00:15:55.861 Test: blockdev writev readv size > 128k ...passed 00:15:55.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.861 Test: blockdev comparev and writev ...passed 00:15:55.861 Test: blockdev nvme passthru rw ...passed 00:15:55.861 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.861 Test: blockdev nvme admin passthru ...passed 00:15:55.861 Test: blockdev copy ...passed 00:15:55.861 Suite: bdevio tests on: nvme0n1 00:15:55.861 Test: blockdev write read block ...passed 00:15:55.861 Test: blockdev write zeroes read block ...passed 00:15:55.861 Test: blockdev write zeroes read no split ...passed 00:15:55.861 Test: blockdev write zeroes read split ...passed 00:15:55.861 Test: blockdev write zeroes read split partial ...passed 00:15:55.861 Test: blockdev reset ...passed 00:15:55.861 Test: blockdev write read 8 blocks ...passed 00:15:55.861 Test: blockdev write read size > 128k ...passed 00:15:55.861 Test: blockdev write read invalid size ...passed 00:15:55.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.861 Test: blockdev write read max offset ...passed 00:15:55.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.861 Test: blockdev writev readv 8 blocks ...passed 00:15:55.861 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.861 Test: blockdev writev readv block ...passed 00:15:55.861 Test: blockdev writev readv size > 128k ...passed 00:15:55.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.861 Test: blockdev comparev and writev ...passed 00:15:55.861 Test: blockdev nvme passthru rw ...passed 00:15:55.861 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.861 Test: blockdev nvme admin passthru ...passed 00:15:55.861 Test: blockdev copy ...passed 
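All six suites above run the identical set of 23 blockdev cases against their bdev, which is exactly where the totals in the CUnit run summary below come from. A trivial check (illustrative shell, not harness code):

    echo $(( 6 * 23 ))   # 138, the "tests" total reported by CUnit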
00:15:55.861 00:15:55.861 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.861 suites 6 6 n/a 0 0 00:15:55.861 tests 138 138 138 0 0 00:15:55.861 asserts 780 780 780 0 n/a 00:15:55.861 00:15:55.861 Elapsed time = 1.071 seconds 00:15:55.861 0 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 75962 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 75962 ']' 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 75962 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75962 00:15:55.861 killing process with pid 75962 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75962' 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 75962 00:15:55.861 21:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 75962 00:15:57.261 ************************************ 00:15:57.261 END TEST bdev_bounds 00:15:57.261 ************************************ 00:15:57.261 21:15:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:15:57.261 00:15:57.261 real 0m2.584s 00:15:57.261 user 0m6.170s 00:15:57.261 sys 0m0.345s 00:15:57.261 21:15:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.261 21:15:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:57.261 21:15:08 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:57.261 21:15:08 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:57.261 21:15:08 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:57.261 21:15:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.261 21:15:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.261 ************************************ 00:15:57.261 START TEST bdev_nbd 00:15:57.261 ************************************ 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
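The killprocess trace above follows a fixed defensive pattern before tearing the app down: require a pid, probe it with kill -0, look up its command name with ps, refuse to signal a sudo wrapper, then kill and wait so the process is reaped and its RPC socket is freed. A condensed sketch of that pattern, assuming Linux and a child process of the current shell (the real helper in autotest_common.sh is more elaborate and also branches on uname for other platforms); the bdev_nbd setup continues below:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # require a pid argument
        kill -0 "$pid" 2>/dev/null || return 1         # process must still exist
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1         # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it so its socket frees up
    }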
00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=76027 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 76027 /var/tmp/spdk-nbd.sock 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76027 ']' 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:57.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.262 21:15:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:57.262 [2024-07-14 21:15:08.534366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:57.262 [2024-07-14 21:15:08.534515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.262 [2024-07-14 21:15:08.696997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.520 [2024-07-14 21:15:08.872187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.088 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.347 
1+0 records in 00:15:58.347 1+0 records out 00:15:58.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425277 s, 9.6 MB/s 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.347 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:58.348 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.348 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:58.348 21:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:58.348 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.348 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.348 21:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.606 1+0 records in 00:15:58.606 1+0 records out 00:15:58.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052605 s, 7.8 MB/s 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.606 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:58.866 21:15:10 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.866 1+0 records in 00:15:58.866 1+0 records out 00:15:58.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651518 s, 6.3 MB/s 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.866 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.125 1+0 records in 00:15:59.125 1+0 records out 00:15:59.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599426 s, 6.8 MB/s 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.125 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.693 1+0 records in 00:15:59.693 1+0 records out 00:15:59.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868607 s, 4.7 MB/s 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.693 21:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:15:59.693 21:15:11 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:59.693 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.694 1+0 records in 00:15:59.694 1+0 records out 00:15:59.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775054 s, 5.3 MB/s 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.694 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:59.951 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd0", 00:15:59.951 "bdev_name": "nvme0n1" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd1", 00:15:59.951 "bdev_name": "nvme1n1" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd2", 00:15:59.951 "bdev_name": "nvme2n1" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd3", 00:15:59.951 "bdev_name": "nvme2n2" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd4", 00:15:59.951 "bdev_name": "nvme2n3" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd5", 00:15:59.951 "bdev_name": "nvme3n1" 00:15:59.951 } 00:15:59.951 ]' 00:15:59.951 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:59.951 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd0", 00:15:59.951 "bdev_name": "nvme0n1" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd1", 00:15:59.951 "bdev_name": "nvme1n1" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd2", 00:15:59.951 "bdev_name": "nvme2n1" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd3", 00:15:59.951 "bdev_name": "nvme2n2" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd4", 00:15:59.951 "bdev_name": "nvme2n3" 00:15:59.951 }, 00:15:59.951 { 00:15:59.951 "nbd_device": "/dev/nbd5", 00:15:59.951 "bdev_name": "nvme3n1" 00:15:59.951 } 00:15:59.951 ]' 00:15:59.951 21:15:11 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:00.209 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:00.209 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.209 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:00.209 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.209 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:00.209 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.209 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:00.466 21:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.466 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.466 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:00.466 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.466 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.466 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.466 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.725 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.984 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:01.243 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:01.243 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:01.243 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:01.243 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.243 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.243 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:01.243 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.243 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.244 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.244 21:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.502 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:01.761 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:01.761 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:01.761 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.019 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:02.019 /dev/nbd0 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.278 1+0 records in 00:16:02.278 1+0 records out 00:16:02.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471177 s, 8.7 MB/s 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:16:02.278 /dev/nbd1 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:02.278 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:16:02.536 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:02.536 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:02.536 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:02.536 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.536 1+0 records in 00:16:02.536 1+0 records out 00:16:02.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741951 s, 5.5 MB/s 00:16:02.536 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.536 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:02.536 21:15:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.536 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:02.536 21:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:02.537 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.537 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.537 21:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:16:02.537 /dev/nbd10 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.537 1+0 records in 00:16:02.537 1+0 records out 00:16:02.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056991 s, 7.2 MB/s 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.537 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:16:02.795 /dev/nbd11 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:02.795 21:15:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.795 1+0 records in 00:16:02.795 1+0 records out 00:16:02.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809883 s, 5.1 MB/s 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.795 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:16:03.054 /dev/nbd12 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.313 1+0 records in 00:16:03.313 1+0 records out 00:16:03.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900197 s, 4.6 MB/s 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:03.313 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:03.572 /dev/nbd13 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.572 1+0 records in 00:16:03.572 1+0 records out 00:16:03.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606201 s, 6.8 MB/s 00:16:03.572 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.573 21:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd0", 00:16:03.832 "bdev_name": "nvme0n1" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd1", 00:16:03.832 "bdev_name": "nvme1n1" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd10", 00:16:03.832 "bdev_name": "nvme2n1" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd11", 00:16:03.832 "bdev_name": "nvme2n2" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd12", 00:16:03.832 "bdev_name": "nvme2n3" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd13", 00:16:03.832 "bdev_name": "nvme3n1" 00:16:03.832 } 00:16:03.832 ]' 00:16:03.832 21:15:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd0", 00:16:03.832 "bdev_name": "nvme0n1" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd1", 00:16:03.832 "bdev_name": "nvme1n1" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd10", 00:16:03.832 "bdev_name": "nvme2n1" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd11", 00:16:03.832 "bdev_name": "nvme2n2" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd12", 00:16:03.832 "bdev_name": "nvme2n3" 00:16:03.832 }, 00:16:03.832 { 00:16:03.832 "nbd_device": "/dev/nbd13", 00:16:03.832 "bdev_name": "nvme3n1" 00:16:03.832 } 00:16:03.832 ]' 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:03.832 /dev/nbd1 00:16:03.832 /dev/nbd10 00:16:03.832 /dev/nbd11 00:16:03.832 /dev/nbd12 00:16:03.832 /dev/nbd13' 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:03.832 /dev/nbd1 00:16:03.832 /dev/nbd10 00:16:03.832 /dev/nbd11 00:16:03.832 /dev/nbd12 00:16:03.832 /dev/nbd13' 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:03.832 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.833 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:03.833 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:03.833 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:03.833 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:03.833 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:03.833 256+0 records in 00:16:03.833 256+0 records out 00:16:03.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010751 s, 97.5 MB/s 00:16:03.833 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.833 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:04.092 256+0 records in 00:16:04.092 256+0 records out 00:16:04.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17045 s, 6.2 MB/s 00:16:04.092 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.092 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:04.351 256+0 records in 00:16:04.351 256+0 records out 00:16:04.351 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.178948 s, 5.9 MB/s 00:16:04.351 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.351 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:04.351 256+0 records in 00:16:04.351 256+0 records out 00:16:04.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170525 s, 6.1 MB/s 00:16:04.351 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.351 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:04.610 256+0 records in 00:16:04.610 256+0 records out 00:16:04.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147142 s, 7.1 MB/s 00:16:04.610 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.610 21:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:04.610 256+0 records in 00:16:04.610 256+0 records out 00:16:04.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165981 s, 6.3 MB/s 00:16:04.610 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.610 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:04.868 256+0 records in 00:16:04.868 256+0 records out 00:16:04.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147223 s, 7.1 MB/s 00:16:04.868 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:04.868 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.868 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:04.868 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:04.868 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:04.868 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:04.868 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:04.868 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.869 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.126 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.127 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.385 21:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:05.644 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.645 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.904 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.163 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.421 21:15:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:06.421 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.422 21:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:16:06.681 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:06.940 malloc_lvol_verify 00:16:06.940 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:07.268 f2deb0bc-33f1-441f-bbac-5ae1b2093a4f 00:16:07.268 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:07.542 248df805-043f-4f60-8646-336557880f3d 00:16:07.542 21:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:07.542 /dev/nbd0 00:16:07.542 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:16:07.542 mke2fs 1.46.5 (30-Dec-2021) 00:16:07.800 Discarding device blocks: 0/4096 done 
00:16:07.800 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:07.800 00:16:07.801 Allocating group tables: 0/1 done 00:16:07.801 Writing inode tables: 0/1 done 00:16:07.801 Creating journal (1024 blocks): done 00:16:07.801 Writing superblocks and filesystem accounting information: 0/1 done 00:16:07.801 00:16:07.801 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:16:07.801 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:07.801 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.801 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.801 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.801 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:07.801 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.801 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 76027 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76027 ']' 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76027 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76027 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:08.060 killing process with pid 76027 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76027' 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76027 00:16:08.060 21:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76027 00:16:08.997 21:15:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:16:08.997 00:16:08.997 real 0m11.991s 00:16:08.997 user 0m16.844s 00:16:08.997 sys 0m3.947s 00:16:08.997 21:15:20 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.997 21:15:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:08.997 ************************************ 00:16:08.997 END TEST bdev_nbd 00:16:08.997 ************************************ 00:16:08.997 21:15:20 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:08.997 21:15:20 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:16:08.997 21:15:20 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:16:08.997 21:15:20 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:16:08.997 21:15:20 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:16:08.997 21:15:20 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:08.997 21:15:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.997 21:15:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.997 ************************************ 00:16:08.997 START TEST bdev_fio 00:16:08.997 ************************************ 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:08.997 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:08.997 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:08.997 21:15:20 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:09.257 ************************************ 00:16:09.257 START TEST bdev_fio_rw_verify 00:16:09.257 ************************************ 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:09.257 
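The trace above shows how the fio pass is assembled: fio_config_gen writes a verify-workload bdev.fio, the harness checks the fio version, emits one [job_...] section per bdev, then locates the sanitizer runtime the SPDK fio plugin links against (ldd | grep libasan) and preloads it alongside the plugin. Condensed into a standalone invocation using only the paths visible in this run (the libasan.so.8 path and /usr/src/fio location are specific to this CI host):

    # Condensed from the traced commands above; library and fio paths are host-specific.
    SPDK=/home/vagrant/spdk_repo/spdk
    LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$SPDK/test/bdev/bdev.fio" --verify_state_save=0 \
        --spdk_json_conf="$SPDK/test/bdev/bdev.json" --spdk_mem=0 \
        --aux-path="$SPDK/../output"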
21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:09.257 21:15:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:09.257 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:09.257 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:09.258 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:09.258 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:09.258 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:09.258 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:09.258 fio-3.35 00:16:09.258 Starting 6 threads 00:16:21.473 00:16:21.473 
job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=76438: Sun Jul 14 21:15:31 2024 00:16:21.473 read: IOPS=29.0k, BW=113MiB/s (119MB/s)(1135MiB/10001msec) 00:16:21.473 slat (usec): min=2, max=985, avg= 6.99, stdev= 4.40 00:16:21.473 clat (usec): min=94, max=4580, avg=638.40, stdev=228.31 00:16:21.473 lat (usec): min=100, max=4588, avg=645.39, stdev=229.02 00:16:21.473 clat percentiles (usec): 00:16:21.473 | 50.000th=[ 660], 99.000th=[ 1156], 99.900th=[ 1811], 99.990th=[ 3949], 00:16:21.473 | 99.999th=[ 4555] 00:16:21.473 write: IOPS=29.4k, BW=115MiB/s (120MB/s)(1147MiB/10001msec); 0 zone resets 00:16:21.473 slat (usec): min=13, max=2065, avg=26.62, stdev=26.20 00:16:21.473 clat (usec): min=83, max=8055, avg=727.66, stdev=249.46 00:16:21.473 lat (usec): min=101, max=8134, avg=754.29, stdev=251.56 00:16:21.473 clat percentiles (usec): 00:16:21.473 | 50.000th=[ 734], 99.000th=[ 1369], 99.900th=[ 2671], 99.990th=[ 4146], 00:16:21.473 | 99.999th=[ 7177] 00:16:21.473 bw ( KiB/s): min=99760, max=141363, per=100.00%, avg=117767.63, stdev=2400.36, samples=114 00:16:21.473 iops : min=24940, max=35340, avg=29441.58, stdev=600.06, samples=114 00:16:21.473 lat (usec) : 100=0.01%, 250=2.86%, 500=18.74%, 750=39.65%, 1000=32.23% 00:16:21.473 lat (msec) : 2=6.38%, 4=0.13%, 10=0.01% 00:16:21.473 cpu : usr=60.71%, sys=25.83%, ctx=8106, majf=0, minf=24724 00:16:21.473 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.473 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.473 issued rwts: total=290441,293707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.473 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:21.473 00:16:21.473 Run status group 0 (all jobs): 00:16:21.473 READ: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=1135MiB (1190MB), run=10001-10001msec 00:16:21.473 WRITE: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=1147MiB (1203MB), run=10001-10001msec 00:16:21.473 ----------------------------------------------------- 00:16:21.473 Suppressions used: 00:16:21.473 count bytes template 00:16:21.473 6 48 /usr/src/fio/parse.c 00:16:21.473 3058 293568 /usr/src/fio/iolog.c 00:16:21.473 1 8 libtcmalloc_minimal.so 00:16:21.473 1 904 libcrypto.so 00:16:21.473 ----------------------------------------------------- 00:16:21.473 00:16:21.473 00:16:21.473 real 0m12.298s 00:16:21.473 user 0m38.230s 00:16:21.473 sys 0m15.847s 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:21.473 ************************************ 00:16:21.473 END TEST bdev_fio_rw_verify 00:16:21.473 ************************************ 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1281 -- # local workload=trim 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d709c0db-d07d-40ba-a8d8-500d6354d51f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d709c0db-d07d-40ba-a8d8-500d6354d51f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a80f9780-1999-4cc3-9eec-233ec495bb19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a80f9780-1999-4cc3-9eec-233ec495bb19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "df06a19f-7134-4bc6-a999-0ed52bb37ce1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df06a19f-7134-4bc6-a999-0ed52bb37ce1",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "aa7a01dd-8a3f-4d7f-89fb-5c76ec07b26a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa7a01dd-8a3f-4d7f-89fb-5c76ec07b26a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "d192de0b-3126-4663-ab7c-519b82fcd5c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d192de0b-3126-4663-ab7c-519b82fcd5c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7b7f33e5-b987-4678-b9f0-7e904c319986"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7b7f33e5-b987-4678-b9f0-7e904c319986",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:21.473 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:16:21.474 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.474 /home/vagrant/spdk_repo/spdk 00:16:21.474 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:16:21.474 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:16:21.474 21:15:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:16:21.474 00:16:21.474 real 0m12.470s 00:16:21.474 user 0m38.326s 00:16:21.474 sys 0m15.922s 00:16:21.474 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.474 21:15:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:21.474 ************************************ 00:16:21.474 END TEST bdev_fio 00:16:21.474 ************************************ 00:16:21.474 21:15:33 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:21.474 21:15:33 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:21.474 21:15:33 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:21.474 21:15:33 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:16:21.474 21:15:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.474 21:15:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:21.474 ************************************ 00:16:21.474 START TEST bdev_verify 00:16:21.474 ************************************ 00:16:21.474 21:15:33 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:21.732 [2024-07-14 21:15:33.113773] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:21.732 [2024-07-14 21:15:33.114766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76618 ] 00:16:21.990 [2024-07-14 21:15:33.284030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:21.990 [2024-07-14 21:15:33.464449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.990 [2024-07-14 21:15:33.464465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.557 Running I/O for 5 seconds... 
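The verify pass whose results follow drives the bdevperf example app against the same bdev.json: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify a write-then-read-back data-verification workload, -t 5 the run time in seconds, and -m 0x3 a core mask selecting cores 0 and 1 (hence the two reactor notices above and the per-core Job lines below). A minimal sketch of the equivalent manual run, assuming this run's repo layout; the flag glosses follow standard bdevperf usage:

    # Sketch of the bdevperf invocation traced above.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3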
00:16:27.816 00:16:27.816 Latency(us) 00:16:27.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.816 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.816 Verification LBA range: start 0x0 length 0xa0000 00:16:27.816 nvme0n1 : 5.06 1569.81 6.13 0.00 0.00 81388.87 15490.33 66250.94 00:16:27.816 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.816 Verification LBA range: start 0xa0000 length 0xa0000 00:16:27.816 nvme0n1 : 5.02 1657.25 6.47 0.00 0.00 77091.27 5600.35 74353.57 00:16:27.816 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.816 Verification LBA range: start 0x0 length 0xbd0bd 00:16:27.816 nvme1n1 : 5.06 2550.02 9.96 0.00 0.00 49800.85 5630.14 119156.36 00:16:27.816 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.816 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:27.816 nvme1n1 : 5.05 2432.93 9.50 0.00 0.00 52232.23 3247.01 149660.39 00:16:27.816 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.816 Verification LBA range: start 0x0 length 0x80000 00:16:27.817 nvme2n1 : 5.07 1539.32 6.01 0.00 0.00 82671.55 12928.47 66727.56 00:16:27.817 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.817 Verification LBA range: start 0x80000 length 0x80000 00:16:27.817 nvme2n1 : 5.05 1649.12 6.44 0.00 0.00 76994.80 5153.51 80073.08 00:16:27.817 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.817 Verification LBA range: start 0x0 length 0x80000 00:16:27.817 nvme2n2 : 5.06 1541.87 6.02 0.00 0.00 82412.13 10902.81 71970.44 00:16:27.817 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.817 Verification LBA range: start 0x80000 length 0x80000 00:16:27.817 nvme2n2 : 5.06 1645.72 6.43 0.00 0.00 76966.23 4766.25 86269.21 00:16:27.817 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.817 Verification LBA range: start 0x0 length 0x80000 00:16:27.817 nvme2n3 : 5.07 1540.79 6.02 0.00 0.00 82303.10 13405.09 69587.32 00:16:27.817 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.817 Verification LBA range: start 0x80000 length 0x80000 00:16:27.817 nvme2n3 : 5.07 1667.08 6.51 0.00 0.00 75837.62 5868.45 67680.81 00:16:27.817 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.817 Verification LBA range: start 0x0 length 0x20000 00:16:27.817 nvme3n1 : 5.07 1563.98 6.11 0.00 0.00 80925.99 10545.34 78166.57 00:16:27.817 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.817 Verification LBA range: start 0x20000 length 0x20000 00:16:27.817 nvme3n1 : 5.06 1642.94 6.42 0.00 0.00 76845.25 4468.36 74353.57 00:16:27.817 =================================================================================================================== 00:16:27.817 Total : 21000.83 82.03 0.00 0.00 72553.66 3247.01 149660.39 00:16:28.751 ************************************ 00:16:28.751 END TEST bdev_verify 00:16:28.751 ************************************ 00:16:28.751 00:16:28.751 real 0m7.152s 00:16:28.751 user 0m11.248s 00:16:28.751 sys 0m1.690s 00:16:28.751 21:15:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.751 21:15:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:28.751 21:15:40 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:16:28.751 21:15:40 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:28.751 21:15:40 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:16:28.751 21:15:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.751 21:15:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:28.751 ************************************ 00:16:28.751 START TEST bdev_verify_big_io 00:16:28.751 ************************************ 00:16:28.751 21:15:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:29.009 [2024-07-14 21:15:40.319906] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:29.009 [2024-07-14 21:15:40.320083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76717 ] 00:16:29.009 [2024-07-14 21:15:40.489302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:29.267 [2024-07-14 21:15:40.659358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.267 [2024-07-14 21:15:40.659373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.832 Running I/O for 5 seconds... 00:16:36.393 00:16:36.393 Latency(us) 00:16:36.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.393 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x0 length 0xa000 00:16:36.393 nvme0n1 : 5.82 133.28 8.33 0.00 0.00 939176.33 72447.07 1021884.97 00:16:36.393 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0xa000 length 0xa000 00:16:36.393 nvme0n1 : 6.00 109.31 6.83 0.00 0.00 1136423.30 23473.80 1441315.37 00:16:36.393 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x0 length 0xbd0b 00:16:36.393 nvme1n1 : 5.90 173.55 10.85 0.00 0.00 696703.65 51237.24 831234.79 00:16:36.393 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:36.393 nvme1n1 : 5.98 165.82 10.36 0.00 0.00 733039.26 20852.36 850299.81 00:16:36.393 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x0 length 0x8000 00:16:36.393 nvme2n1 : 5.90 97.56 6.10 0.00 0.00 1211917.86 77213.32 2120030.02 00:16:36.393 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x8000 length 0x8000 00:16:36.393 nvme2n1 : 5.99 114.95 7.18 0.00 0.00 1021742.61 67204.19 2379314.27 00:16:36.393 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x0 length 0x8000 00:16:36.393 nvme2n2 : 5.98 128.51 8.03 0.00 0.00 885324.41 25261.15 1548079.48 00:16:36.393 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:16:36.393 Verification LBA range: start 0x8000 length 0x8000 00:16:36.393 nvme2n2 : 5.96 115.37 7.21 0.00 0.00 990138.12 137268.13 1235413.18 00:16:36.393 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x0 length 0x8000 00:16:36.393 nvme2n3 : 5.91 94.80 5.93 0.00 0.00 1165025.81 72447.07 2745362.62 00:16:36.393 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x8000 length 0x8000 00:16:36.393 nvme2n3 : 5.99 125.58 7.85 0.00 0.00 886776.24 17635.14 2318306.21 00:16:36.393 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x0 length 0x2000 00:16:36.393 nvme3n1 : 5.99 137.52 8.60 0.00 0.00 782133.44 6404.65 2470826.36 00:16:36.393 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:36.393 Verification LBA range: start 0x2000 length 0x2000 00:16:36.393 nvme3n1 : 5.99 125.51 7.84 0.00 0.00 858316.32 13881.72 2455574.34 00:16:36.393 =================================================================================================================== 00:16:36.393 Total : 1521.78 95.11 0.00 0.00 915002.63 6404.65 2745362.62 00:16:37.328 00:16:37.328 real 0m8.372s 00:16:37.328 user 0m15.089s 00:16:37.328 sys 0m0.532s 00:16:37.328 21:15:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.328 21:15:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.328 ************************************ 00:16:37.328 END TEST bdev_verify_big_io 00:16:37.328 ************************************ 00:16:37.328 21:15:48 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:37.328 21:15:48 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.328 21:15:48 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:37.328 21:15:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.328 21:15:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.328 ************************************ 00:16:37.328 START TEST bdev_write_zeroes 00:16:37.328 ************************************ 00:16:37.328 21:15:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.328 [2024-07-14 21:15:48.728478] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:37.328 [2024-07-14 21:15:48.728675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76827 ] 00:16:37.586 [2024-07-14 21:15:48.889201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.586 [2024-07-14 21:15:49.087796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.153 Running I/O for 1 seconds... 
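bdev_write_zeroes reuses the same bdevperf harness with only the workload and duration changed (-w write_zeroes -t 1, single core by default per the EAL notice above), exercising each bdev's write-zeroes path rather than data verification; its one-second results follow. The equivalent standalone form, again assuming this run's repo layout:

    # Same harness as the verify pass, different workload (as traced above).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w write_zeroes -t 1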
00:16:39.088 00:16:39.088 Latency(us) 00:16:39.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.088 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.088 nvme0n1 : 1.02 10914.45 42.63 0.00 0.00 11714.78 6821.70 20494.89 00:16:39.088 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.088 nvme1n1 : 1.02 17141.49 66.96 0.00 0.00 7441.95 4289.63 18111.77 00:16:39.088 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.088 nvme2n1 : 1.02 10895.52 42.56 0.00 0.00 11659.97 6553.60 20494.89 00:16:39.088 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.088 nvme2n2 : 1.02 10885.56 42.52 0.00 0.00 11663.95 6613.18 20614.05 00:16:39.088 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.088 nvme2n3 : 1.02 10874.09 42.48 0.00 0.00 11665.83 6702.55 21567.30 00:16:39.088 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.088 nvme3n1 : 1.02 10864.43 42.44 0.00 0.00 11669.49 6821.70 22401.40 00:16:39.088 =================================================================================================================== 00:16:39.088 Total : 71575.54 279.59 0.00 0.00 10662.40 4289.63 22401.40 00:16:40.461 00:16:40.461 real 0m2.931s 00:16:40.461 user 0m2.177s 00:16:40.461 sys 0m0.566s 00:16:40.461 21:15:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.461 21:15:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:40.461 ************************************ 00:16:40.461 END TEST bdev_write_zeroes 00:16:40.461 ************************************ 00:16:40.461 21:15:51 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:40.461 21:15:51 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:40.461 21:15:51 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:40.461 21:15:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.461 21:15:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.461 ************************************ 00:16:40.461 START TEST bdev_json_nonenclosed 00:16:40.461 ************************************ 00:16:40.461 21:15:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:40.461 [2024-07-14 21:15:51.729482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:40.461 [2024-07-14 21:15:51.729666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76881 ] 00:16:40.461 [2024-07-14 21:15:51.900168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.718 [2024-07-14 21:15:52.050525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.718 [2024-07-14 21:15:52.050637] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:40.718 [2024-07-14 21:15:52.050659] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:40.718 [2024-07-14 21:15:52.050674] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:40.976 00:16:40.976 real 0m0.794s 00:16:40.976 user 0m0.560s 00:16:40.976 sys 0m0.128s 00:16:40.976 21:15:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:16:40.976 21:15:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.976 21:15:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 ************************************ 00:16:40.976 END TEST bdev_json_nonenclosed 00:16:40.976 ************************************ 00:16:40.976 21:15:52 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:16:40.976 21:15:52 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:16:40.976 21:15:52 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:40.976 21:15:52 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:40.976 21:15:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.976 21:15:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 ************************************ 00:16:40.976 START TEST bdev_json_nonarray 00:16:40.976 ************************************ 00:16:40.976 21:15:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:41.235 [2024-07-14 21:15:52.573276] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:41.235 [2024-07-14 21:15:52.573468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76908 ] 00:16:41.236 [2024-07-14 21:15:52.739235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.494 [2024-07-14 21:15:52.939657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.494 [2024-07-14 21:15:52.939780] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
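Both JSON negative tests follow the same pattern: bdevperf is started with a deliberately malformed config (nonenclosed.json is not wrapped in an enclosing object; nonarray.json makes "subsystems" a non-array), the app is expected to abort during json_config preparation, and the harness captures the non-zero exit status (es=234 here) and propagates it via return 234, which the caller then discards (the traced true) because failure is the expected outcome. A sketch of that expect-failure idiom, using the fixture paths from this run:

    # Negative test: a malformed config must make bdevperf exit non-zero.
    SPDK=/home/vagrant/spdk_repo/spdk
    if "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/nonarray.json" \
         -q 128 -o 4096 -w write_zeroes -t 1 ''; then
      echo "ERROR: malformed config was accepted" >&2; exit 1
    fi
    echo "config rejected as expected"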
00:16:41.494 [2024-07-14 21:15:52.939818] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:41.494 [2024-07-14 21:15:52.939836] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:42.059 00:16:42.059 real 0m0.854s 00:16:42.059 user 0m0.622s 00:16:42.059 sys 0m0.126s 00:16:42.059 21:15:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:16:42.059 21:15:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.059 21:15:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:42.059 ************************************ 00:16:42.059 END TEST bdev_json_nonarray 00:16:42.059 ************************************ 00:16:42.059 21:15:53 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:42.059 21:15:53 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:42.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:44.030 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.030 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.030 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.030 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.030 00:16:44.030 real 1m0.834s 00:16:44.030 user 1m43.002s 00:16:44.030 sys 0m26.441s 00:16:44.030 21:15:55 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.030 ************************************ 00:16:44.030 21:15:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:44.030 END TEST blockdev_xnvme 00:16:44.030 ************************************ 00:16:44.030 21:15:55 -- common/autotest_common.sh@1142 -- # return 0 00:16:44.030 21:15:55 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:44.030 21:15:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:44.030 21:15:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.030 21:15:55 -- common/autotest_common.sh@10 -- # set +x 00:16:44.030 ************************************ 00:16:44.030 START TEST ublk 00:16:44.030 ************************************ 00:16:44.030 21:15:55 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:44.030 * Looking for test storage... 
00:16:44.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:44.289 21:15:55 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:44.289 21:15:55 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:44.289 21:15:55 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:44.289 21:15:55 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:44.289 21:15:55 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:44.289 21:15:55 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:44.289 21:15:55 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:44.289 21:15:55 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:44.289 21:15:55 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:44.289 21:15:55 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:44.289 21:15:55 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.289 21:15:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.289 ************************************ 00:16:44.289 START TEST test_save_ublk_config 00:16:44.289 ************************************ 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=77198 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 77198 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77198 ']' 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.289 21:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:44.289 [2024-07-14 21:15:55.727341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
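test_save_config, now starting, drives one spdk_tgt (pid 77198) through the full ublk bring-up and then snapshots its state: create the ublk target, back a disk with a malloc bdev, and capture everything with save_config. Condensed into direct rpc.py calls, the sequence amounts to the sketch below; the flag spellings are assumptions, since the trace uses the suite's rpc_cmd wrapper, but the RPC names and parameters match the saved configuration reproduced further down:

    # Sketch of the save-config flow against the default /var/tmp/spdk.sock.
    ./scripts/rpc.py ublk_create_target                    # cpumask "1" in the saved config
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096 # 8192 blocks x 4096 B = 32 MiB
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128 # exposes /dev/ublkb0
    config=$(./scripts/rpc.py save_config)                 # the JSON dump shown below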
00:16:44.289 [2024-07-14 21:15:55.727504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77198 ] 00:16:44.548 [2024-07-14 21:15:55.899884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.806 [2024-07-14 21:15:56.119824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.373 [2024-07-14 21:15:56.763937] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:45.373 [2024-07-14 21:15:56.765177] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:45.373 malloc0 00:16:45.373 [2024-07-14 21:15:56.833994] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:45.373 [2024-07-14 21:15:56.834107] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:45.373 [2024-07-14 21:15:56.834129] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:45.373 [2024-07-14 21:15:56.834141] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:45.373 [2024-07-14 21:15:56.841961] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:45.373 [2024-07-14 21:15:56.841996] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:45.373 [2024-07-14 21:15:56.848917] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:45.373 [2024-07-14 21:15:56.849062] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:45.373 [2024-07-14 21:15:56.865897] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:45.373 0 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.373 21:15:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.632 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.632 21:15:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:45.632 "subsystems": [ 00:16:45.632 { 00:16:45.632 "subsystem": "keyring", 00:16:45.632 "config": [] 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "subsystem": "iobuf", 00:16:45.632 "config": [ 00:16:45.632 { 00:16:45.632 "method": "iobuf_set_options", 00:16:45.632 "params": { 00:16:45.632 "small_pool_count": 8192, 00:16:45.632 "large_pool_count": 1024, 00:16:45.632 "small_bufsize": 8192, 00:16:45.632 "large_bufsize": 135168 00:16:45.632 } 00:16:45.632 } 00:16:45.632 ] 00:16:45.632 }, 00:16:45.632 { 
00:16:45.632 "subsystem": "sock", 00:16:45.632 "config": [ 00:16:45.632 { 00:16:45.632 "method": "sock_set_default_impl", 00:16:45.632 "params": { 00:16:45.632 "impl_name": "posix" 00:16:45.632 } 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "method": "sock_impl_set_options", 00:16:45.632 "params": { 00:16:45.632 "impl_name": "ssl", 00:16:45.632 "recv_buf_size": 4096, 00:16:45.632 "send_buf_size": 4096, 00:16:45.632 "enable_recv_pipe": true, 00:16:45.632 "enable_quickack": false, 00:16:45.632 "enable_placement_id": 0, 00:16:45.632 "enable_zerocopy_send_server": true, 00:16:45.632 "enable_zerocopy_send_client": false, 00:16:45.632 "zerocopy_threshold": 0, 00:16:45.632 "tls_version": 0, 00:16:45.632 "enable_ktls": false 00:16:45.632 } 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "method": "sock_impl_set_options", 00:16:45.632 "params": { 00:16:45.632 "impl_name": "posix", 00:16:45.632 "recv_buf_size": 2097152, 00:16:45.632 "send_buf_size": 2097152, 00:16:45.632 "enable_recv_pipe": true, 00:16:45.632 "enable_quickack": false, 00:16:45.632 "enable_placement_id": 0, 00:16:45.632 "enable_zerocopy_send_server": true, 00:16:45.632 "enable_zerocopy_send_client": false, 00:16:45.632 "zerocopy_threshold": 0, 00:16:45.632 "tls_version": 0, 00:16:45.632 "enable_ktls": false 00:16:45.632 } 00:16:45.632 } 00:16:45.632 ] 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "subsystem": "vmd", 00:16:45.632 "config": [] 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "subsystem": "accel", 00:16:45.632 "config": [ 00:16:45.632 { 00:16:45.632 "method": "accel_set_options", 00:16:45.632 "params": { 00:16:45.632 "small_cache_size": 128, 00:16:45.632 "large_cache_size": 16, 00:16:45.632 "task_count": 2048, 00:16:45.632 "sequence_count": 2048, 00:16:45.632 "buf_count": 2048 00:16:45.632 } 00:16:45.632 } 00:16:45.632 ] 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "subsystem": "bdev", 00:16:45.632 "config": [ 00:16:45.632 { 00:16:45.632 "method": "bdev_set_options", 00:16:45.632 "params": { 00:16:45.632 "bdev_io_pool_size": 65535, 00:16:45.632 "bdev_io_cache_size": 256, 00:16:45.632 "bdev_auto_examine": true, 00:16:45.632 "iobuf_small_cache_size": 128, 00:16:45.632 "iobuf_large_cache_size": 16 00:16:45.632 } 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "method": "bdev_raid_set_options", 00:16:45.632 "params": { 00:16:45.632 "process_window_size_kb": 1024 00:16:45.632 } 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "method": "bdev_iscsi_set_options", 00:16:45.632 "params": { 00:16:45.632 "timeout_sec": 30 00:16:45.632 } 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "method": "bdev_nvme_set_options", 00:16:45.632 "params": { 00:16:45.632 "action_on_timeout": "none", 00:16:45.632 "timeout_us": 0, 00:16:45.632 "timeout_admin_us": 0, 00:16:45.632 "keep_alive_timeout_ms": 10000, 00:16:45.632 "arbitration_burst": 0, 00:16:45.632 "low_priority_weight": 0, 00:16:45.632 "medium_priority_weight": 0, 00:16:45.632 "high_priority_weight": 0, 00:16:45.632 "nvme_adminq_poll_period_us": 10000, 00:16:45.632 "nvme_ioq_poll_period_us": 0, 00:16:45.632 "io_queue_requests": 0, 00:16:45.632 "delay_cmd_submit": true, 00:16:45.632 "transport_retry_count": 4, 00:16:45.632 "bdev_retry_count": 3, 00:16:45.632 "transport_ack_timeout": 0, 00:16:45.632 "ctrlr_loss_timeout_sec": 0, 00:16:45.632 "reconnect_delay_sec": 0, 00:16:45.632 "fast_io_fail_timeout_sec": 0, 00:16:45.632 "disable_auto_failback": false, 00:16:45.632 "generate_uuids": false, 00:16:45.632 "transport_tos": 0, 00:16:45.632 "nvme_error_stat": false, 00:16:45.632 "rdma_srq_size": 0, 00:16:45.632 
"io_path_stat": false, 00:16:45.632 "allow_accel_sequence": false, 00:16:45.632 "rdma_max_cq_size": 0, 00:16:45.632 "rdma_cm_event_timeout_ms": 0, 00:16:45.632 "dhchap_digests": [ 00:16:45.632 "sha256", 00:16:45.632 "sha384", 00:16:45.632 "sha512" 00:16:45.632 ], 00:16:45.632 "dhchap_dhgroups": [ 00:16:45.632 "null", 00:16:45.632 "ffdhe2048", 00:16:45.632 "ffdhe3072", 00:16:45.632 "ffdhe4096", 00:16:45.632 "ffdhe6144", 00:16:45.632 "ffdhe8192" 00:16:45.632 ] 00:16:45.632 } 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "method": "bdev_nvme_set_hotplug", 00:16:45.632 "params": { 00:16:45.632 "period_us": 100000, 00:16:45.632 "enable": false 00:16:45.632 } 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "method": "bdev_malloc_create", 00:16:45.632 "params": { 00:16:45.632 "name": "malloc0", 00:16:45.632 "num_blocks": 8192, 00:16:45.632 "block_size": 4096, 00:16:45.632 "physical_block_size": 4096, 00:16:45.632 "uuid": "69069e8a-db29-476b-8fc2-697f56e117b9", 00:16:45.632 "optimal_io_boundary": 0 00:16:45.632 } 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "method": "bdev_wait_for_examine" 00:16:45.632 } 00:16:45.632 ] 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "subsystem": "scsi", 00:16:45.632 "config": null 00:16:45.632 }, 00:16:45.632 { 00:16:45.632 "subsystem": "scheduler", 00:16:45.632 "config": [ 00:16:45.632 { 00:16:45.632 "method": "framework_set_scheduler", 00:16:45.632 "params": { 00:16:45.632 "name": "static" 00:16:45.632 } 00:16:45.632 } 00:16:45.632 ] 00:16:45.632 }, 00:16:45.633 { 00:16:45.633 "subsystem": "vhost_scsi", 00:16:45.633 "config": [] 00:16:45.633 }, 00:16:45.633 { 00:16:45.633 "subsystem": "vhost_blk", 00:16:45.633 "config": [] 00:16:45.633 }, 00:16:45.633 { 00:16:45.633 "subsystem": "ublk", 00:16:45.633 "config": [ 00:16:45.633 { 00:16:45.633 "method": "ublk_create_target", 00:16:45.633 "params": { 00:16:45.633 "cpumask": "1" 00:16:45.633 } 00:16:45.633 }, 00:16:45.633 { 00:16:45.633 "method": "ublk_start_disk", 00:16:45.633 "params": { 00:16:45.633 "bdev_name": "malloc0", 00:16:45.633 "ublk_id": 0, 00:16:45.633 "num_queues": 1, 00:16:45.633 "queue_depth": 128 00:16:45.633 } 00:16:45.633 } 00:16:45.633 ] 00:16:45.633 }, 00:16:45.633 { 00:16:45.633 "subsystem": "nbd", 00:16:45.633 "config": [] 00:16:45.633 }, 00:16:45.633 { 00:16:45.633 "subsystem": "nvmf", 00:16:45.633 "config": [ 00:16:45.633 { 00:16:45.633 "method": "nvmf_set_config", 00:16:45.633 "params": { 00:16:45.633 "discovery_filter": "match_any", 00:16:45.633 "admin_cmd_passthru": { 00:16:45.633 "identify_ctrlr": false 00:16:45.633 } 00:16:45.633 } 00:16:45.633 }, 00:16:45.633 { 00:16:45.633 "method": "nvmf_set_max_subsystems", 00:16:45.633 "params": { 00:16:45.633 "max_subsystems": 1024 00:16:45.633 } 00:16:45.633 }, 00:16:45.633 { 00:16:45.633 "method": "nvmf_set_crdt", 00:16:45.633 "params": { 00:16:45.633 "crdt1": 0, 00:16:45.633 "crdt2": 0, 00:16:45.633 "crdt3": 0 00:16:45.633 } 00:16:45.633 } 00:16:45.633 ] 00:16:45.633 }, 00:16:45.633 { 00:16:45.633 "subsystem": "iscsi", 00:16:45.633 "config": [ 00:16:45.633 { 00:16:45.633 "method": "iscsi_set_options", 00:16:45.633 "params": { 00:16:45.633 "node_base": "iqn.2016-06.io.spdk", 00:16:45.633 "max_sessions": 128, 00:16:45.633 "max_connections_per_session": 2, 00:16:45.633 "max_queue_depth": 64, 00:16:45.633 "default_time2wait": 2, 00:16:45.633 "default_time2retain": 20, 00:16:45.633 "first_burst_length": 8192, 00:16:45.633 "immediate_data": true, 00:16:45.633 "allow_duplicated_isid": false, 00:16:45.633 "error_recovery_level": 0, 00:16:45.633 "nop_timeout": 60, 
00:16:45.633 "nop_in_interval": 30, 00:16:45.633 "disable_chap": false, 00:16:45.633 "require_chap": false, 00:16:45.633 "mutual_chap": false, 00:16:45.633 "chap_group": 0, 00:16:45.633 "max_large_datain_per_connection": 64, 00:16:45.633 "max_r2t_per_connection": 4, 00:16:45.633 "pdu_pool_size": 36864, 00:16:45.633 "immediate_data_pool_size": 16384, 00:16:45.633 "data_out_pool_size": 2048 00:16:45.633 } 00:16:45.633 } 00:16:45.633 ] 00:16:45.633 } 00:16:45.633 ] 00:16:45.633 }' 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 77198 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77198 ']' 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77198 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77198 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:45.633 killing process with pid 77198 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77198' 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77198 00:16:45.633 21:15:57 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77198 00:16:47.009 [2024-07-14 21:15:58.305206] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:47.009 [2024-07-14 21:15:58.339961] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:47.009 [2024-07-14 21:15:58.340171] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:47.009 [2024-07-14 21:15:58.347913] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:47.009 [2024-07-14 21:15:58.347993] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:47.009 [2024-07-14 21:15:58.348006] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:47.009 [2024-07-14 21:15:58.348037] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:47.009 [2024-07-14 21:15:58.348214] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=77253 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 77253 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77253 ']' 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
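That JSON blob is the whole point of the test: pid 77198 is killed, and a second target (tgtpid=77253, traced next) is booted with the captured config fed straight back in. The -c /dev/fd/63 on its command line is bash process substitution over an echo of the saved string, which is why the same config fills the next stretch of the log. A minimal sketch of the restart, reusing the harness helpers named in the trace:

    # Sketch: replay the saved configuration into a fresh target.
    killprocess "$tgtpid"                                # stop the first spdk_tgt
    ./build/bin/spdk_tgt -L ublk -c <(echo "$config") &  # appears as '-c /dev/fd/63'
    tgtpid=$!
    waitforlisten "$tgtpid"                              # block until the RPC socket is up

If the replay works, /dev/ublkb0 reappears without any further RPCs, which is exactly what ublk.sh@122-123 verifies afterwards with ublk_get_disks and a block-device check.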
00:16:47.945 21:15:59 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:47.945 21:15:59 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:47.945 "subsystems": [ 00:16:47.945 { 00:16:47.945 "subsystem": "keyring", 00:16:47.945 "config": [] 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "subsystem": "iobuf", 00:16:47.945 "config": [ 00:16:47.945 { 00:16:47.945 "method": "iobuf_set_options", 00:16:47.945 "params": { 00:16:47.945 "small_pool_count": 8192, 00:16:47.945 "large_pool_count": 1024, 00:16:47.945 "small_bufsize": 8192, 00:16:47.945 "large_bufsize": 135168 00:16:47.945 } 00:16:47.945 } 00:16:47.945 ] 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "subsystem": "sock", 00:16:47.945 "config": [ 00:16:47.945 { 00:16:47.945 "method": "sock_set_default_impl", 00:16:47.945 "params": { 00:16:47.945 "impl_name": "posix" 00:16:47.945 } 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "method": "sock_impl_set_options", 00:16:47.945 "params": { 00:16:47.945 "impl_name": "ssl", 00:16:47.945 "recv_buf_size": 4096, 00:16:47.945 "send_buf_size": 4096, 00:16:47.945 "enable_recv_pipe": true, 00:16:47.945 "enable_quickack": false, 00:16:47.945 "enable_placement_id": 0, 00:16:47.945 "enable_zerocopy_send_server": true, 00:16:47.945 "enable_zerocopy_send_client": false, 00:16:47.945 "zerocopy_threshold": 0, 00:16:47.945 "tls_version": 0, 00:16:47.945 "enable_ktls": false 00:16:47.945 } 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "method": "sock_impl_set_options", 00:16:47.945 "params": { 00:16:47.945 "impl_name": "posix", 00:16:47.945 "recv_buf_size": 2097152, 00:16:47.945 "send_buf_size": 2097152, 00:16:47.945 "enable_recv_pipe": true, 00:16:47.945 "enable_quickack": false, 00:16:47.945 "enable_placement_id": 0, 00:16:47.945 "enable_zerocopy_send_server": true, 00:16:47.945 "enable_zerocopy_send_client": false, 00:16:47.945 "zerocopy_threshold": 0, 00:16:47.945 "tls_version": 0, 00:16:47.945 "enable_ktls": false 00:16:47.945 } 00:16:47.945 } 00:16:47.945 ] 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "subsystem": "vmd", 00:16:47.945 "config": [] 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "subsystem": "accel", 00:16:47.945 "config": [ 00:16:47.945 { 00:16:47.945 "method": "accel_set_options", 00:16:47.945 "params": { 00:16:47.945 "small_cache_size": 128, 00:16:47.945 "large_cache_size": 16, 00:16:47.945 "task_count": 2048, 00:16:47.945 "sequence_count": 2048, 00:16:47.945 "buf_count": 2048 00:16:47.945 } 00:16:47.945 } 00:16:47.945 ] 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "subsystem": "bdev", 00:16:47.945 "config": [ 00:16:47.945 { 00:16:47.945 "method": "bdev_set_options", 00:16:47.945 "params": { 00:16:47.945 "bdev_io_pool_size": 65535, 00:16:47.945 "bdev_io_cache_size": 256, 00:16:47.945 "bdev_auto_examine": true, 00:16:47.945 "iobuf_small_cache_size": 128, 00:16:47.945 "iobuf_large_cache_size": 16 00:16:47.945 } 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "method": "bdev_raid_set_options", 00:16:47.945 "params": { 00:16:47.945 "process_window_size_kb": 1024 00:16:47.945 } 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "method": "bdev_iscsi_set_options", 00:16:47.945 "params": { 00:16:47.945 "timeout_sec": 30 00:16:47.945 } 00:16:47.945 }, 00:16:47.945 { 00:16:47.945 "method": "bdev_nvme_set_options", 00:16:47.945 "params": { 00:16:47.945 "action_on_timeout": "none", 00:16:47.945 "timeout_us": 0, 00:16:47.945 "timeout_admin_us": 0, 00:16:47.945 "keep_alive_timeout_ms": 
10000, 00:16:47.945 "arbitration_burst": 0, 00:16:47.945 "low_priority_weight": 0, 00:16:47.945 "medium_priority_weight": 0, 00:16:47.945 "high_priority_weight": 0, 00:16:47.945 "nvme_adminq_poll_period_us": 10000, 00:16:47.945 "nvme_ioq_poll_period_us": 0, 00:16:47.945 "io_queue_requests": 0, 00:16:47.945 "delay_cmd_submit": true, 00:16:47.945 "transport_retry_count": 4, 00:16:47.945 "bdev_retry_count": 3, 00:16:47.945 "transport_ack_timeout": 0, 00:16:47.945 "ctrlr_loss_timeout_sec": 0, 00:16:47.946 "reconnect_delay_sec": 0, 00:16:47.946 "fast_io_fail_timeout_sec": 0, 00:16:47.946 "disable_auto_failback": false, 00:16:47.946 "generate_uuids": false, 00:16:47.946 "transport_tos": 0, 00:16:47.946 "nvme_error_stat": false, 00:16:47.946 "rdma_srq_size": 0, 00:16:47.946 "io_path_stat": false, 00:16:47.946 "allow_accel_sequence": false, 00:16:47.946 "rdma_max_cq_size": 0, 00:16:47.946 "rdma_cm_event_timeout_ms": 0, 00:16:47.946 "dhchap_digests": [ 00:16:47.946 "sha256", 00:16:47.946 "sha384", 00:16:47.946 "sha512" 00:16:47.946 ], 00:16:47.946 "dhchap_dhgroups": [ 00:16:47.946 "null", 00:16:47.946 "ffdhe2048", 00:16:47.946 "ffdhe3072", 00:16:47.946 "ffdhe4096", 00:16:47.946 "ffdhe6144", 00:16:47.946 "ffdhe8192" 00:16:47.946 ] 00:16:47.946 } 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "method": "bdev_nvme_set_hotplug", 00:16:47.946 "params": { 00:16:47.946 "period_us": 100000, 00:16:47.946 "enable": false 00:16:47.946 } 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "method": "bdev_malloc_create", 00:16:47.946 "params": { 00:16:47.946 "name": "malloc0", 00:16:47.946 "num_blocks": 8192, 00:16:47.946 "block_size": 4096, 00:16:47.946 "physical_block_size": 4096, 00:16:47.946 "uuid": "69069e8a-db29-476b-8fc2-697f56e117b9", 00:16:47.946 "optimal_io_boundary": 0 00:16:47.946 } 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "method": "bdev_wait_for_examine" 00:16:47.946 } 00:16:47.946 ] 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "subsystem": "scsi", 00:16:47.946 "config": null 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "subsystem": "scheduler", 00:16:47.946 "config": [ 00:16:47.946 { 00:16:47.946 "method": "framework_set_scheduler", 00:16:47.946 "params": { 00:16:47.946 "name": "static" 00:16:47.946 } 00:16:47.946 } 00:16:47.946 ] 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "subsystem": "vhost_scsi", 00:16:47.946 "config": [] 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "subsystem": "vhost_blk", 00:16:47.946 "config": [] 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "subsystem": "ublk", 00:16:47.946 "config": [ 00:16:47.946 { 00:16:47.946 "method": "ublk_create_target", 00:16:47.946 "params": { 00:16:47.946 "cpumask": "1" 00:16:47.946 } 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "method": "ublk_start_disk", 00:16:47.946 "params": { 00:16:47.946 "bdev_name": "malloc0", 00:16:47.946 "ublk_id": 0, 00:16:47.946 "num_queues": 1, 00:16:47.946 "queue_depth": 128 00:16:47.946 } 00:16:47.946 } 00:16:47.946 ] 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "subsystem": "nbd", 00:16:47.946 "config": [] 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "subsystem": "nvmf", 00:16:47.946 "config": [ 00:16:47.946 { 00:16:47.946 "method": "nvmf_set_config", 00:16:47.946 "params": { 00:16:47.946 "discovery_filter": "match_any", 00:16:47.946 "admin_cmd_passthru": { 00:16:47.946 "identify_ctrlr": false 00:16:47.946 } 00:16:47.946 } 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "method": "nvmf_set_max_subsystems", 00:16:47.946 "params": { 00:16:47.946 "max_subsystems": 1024 00:16:47.946 } 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 
"method": "nvmf_set_crdt", 00:16:47.946 "params": { 00:16:47.946 "crdt1": 0, 00:16:47.946 "crdt2": 0, 00:16:47.946 "crdt3": 0 00:16:47.946 } 00:16:47.946 } 00:16:47.946 ] 00:16:47.946 }, 00:16:47.946 { 00:16:47.946 "subsystem": "iscsi", 00:16:47.946 "config": [ 00:16:47.946 { 00:16:47.946 "method": "iscsi_set_options", 00:16:47.946 "params": { 00:16:47.946 "node_base": "iqn.2016-06.io.spdk", 00:16:47.946 "max_sessions": 128, 00:16:47.946 "max_connections_per_session": 2, 00:16:47.946 "max_queue_depth": 64, 00:16:47.946 "default_time2wait": 2, 00:16:47.946 "default_time2retain": 20, 00:16:47.946 "first_burst_length": 8192, 00:16:47.946 "immediate_data": true, 00:16:47.946 "allow_duplicated_isid": false, 00:16:47.946 "error_recovery_level": 0, 00:16:47.946 "nop_timeout": 60, 00:16:47.946 "nop_in_interval": 30, 00:16:47.946 "disable_chap": false, 00:16:47.946 "require_chap": false, 00:16:47.946 "mutual_chap": false, 00:16:47.946 "chap_group": 0, 00:16:47.946 "max_large_datain_per_connection": 64, 00:16:47.946 "max_r2t_per_connection": 4, 00:16:47.946 "pdu_pool_size": 36864, 00:16:47.946 "immediate_data_pool_size": 16384, 00:16:47.946 "data_out_pool_size": 2048 00:16:47.946 } 00:16:47.946 } 00:16:47.946 ] 00:16:47.946 } 00:16:47.946 ] 00:16:47.946 }' 00:16:48.205 [2024-07-14 21:15:59.573501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:48.205 [2024-07-14 21:15:59.573651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77253 ] 00:16:48.205 [2024-07-14 21:15:59.737022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.464 [2024-07-14 21:15:59.917374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.401 [2024-07-14 21:16:00.676907] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:49.401 [2024-07-14 21:16:00.677882] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:49.401 [2024-07-14 21:16:00.685022] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:49.401 [2024-07-14 21:16:00.685107] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:49.401 [2024-07-14 21:16:00.685123] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:49.401 [2024-07-14 21:16:00.685132] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:49.401 [2024-07-14 21:16:00.693980] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:49.401 [2024-07-14 21:16:00.694005] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:49.401 [2024-07-14 21:16:00.700931] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:49.401 [2024-07-14 21:16:00.701028] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:49.401 [2024-07-14 21:16:00.717905] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 
00:16:49.401 21:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 77253 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77253 ']' 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77253 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77253 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:49.401 killing process with pid 77253 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77253' 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77253 00:16:49.401 21:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77253 00:16:50.774 [2024-07-14 21:16:02.092594] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:50.774 [2024-07-14 21:16:02.126947] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:50.774 [2024-07-14 21:16:02.127145] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:50.774 [2024-07-14 21:16:02.134903] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:50.774 [2024-07-14 21:16:02.134973] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:50.774 [2024-07-14 21:16:02.134987] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:50.774 [2024-07-14 21:16:02.135030] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:50.774 [2024-07-14 21:16:02.139084] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:51.708 21:16:03 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:51.708 00:16:51.708 real 0m7.616s 00:16:51.708 user 0m6.566s 00:16:51.708 sys 0m1.898s 00:16:51.708 21:16:03 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.708 21:16:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:51.708 ************************************ 00:16:51.708 END TEST test_save_ublk_config 00:16:51.708 ************************************ 00:16:51.967 21:16:03 ublk -- common/autotest_common.sh@1142 -- # return 0 00:16:51.967 21:16:03 ublk -- ublk/ublk.sh@139 -- # spdk_pid=77326 00:16:51.967 21:16:03 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.967 21:16:03 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 
-L ublk 00:16:51.967 21:16:03 ublk -- ublk/ublk.sh@141 -- # waitforlisten 77326 00:16:51.967 21:16:03 ublk -- common/autotest_common.sh@829 -- # '[' -z 77326 ']' 00:16:51.967 21:16:03 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.967 21:16:03 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.967 21:16:03 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.967 21:16:03 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.967 21:16:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.967 [2024-07-14 21:16:03.359893] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:51.967 [2024-07-14 21:16:03.360040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77326 ] 00:16:52.225 [2024-07-14 21:16:03.521749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:52.225 [2024-07-14 21:16:03.689286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.225 [2024-07-14 21:16:03.689299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.792 21:16:04 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.792 21:16:04 ublk -- common/autotest_common.sh@862 -- # return 0 00:16:52.792 21:16:04 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:52.792 21:16:04 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:52.792 21:16:04 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:52.792 21:16:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.792 ************************************ 00:16:52.792 START TEST test_create_ublk 00:16:52.792 ************************************ 00:16:52.792 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:16:52.792 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:52.792 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.792 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.792 [2024-07-14 21:16:04.325896] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:52.792 [2024-07-14 21:16:04.328441] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:52.792 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.792 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:52.792 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:52.792 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.792 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.050 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.050 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:53.050 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:53.050 21:16:04 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.050 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.050 [2024-07-14 21:16:04.550406] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:53.050 [2024-07-14 21:16:04.550999] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:53.050 [2024-07-14 21:16:04.551035] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:53.050 [2024-07-14 21:16:04.551050] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:53.050 [2024-07-14 21:16:04.557936] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:53.050 [2024-07-14 21:16:04.557977] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:53.050 [2024-07-14 21:16:04.565887] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:53.050 [2024-07-14 21:16:04.578080] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:53.307 [2024-07-14 21:16:04.599899] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:53.307 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:53.307 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.307 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.307 21:16:04 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:53.307 { 00:16:53.307 "ublk_device": "/dev/ublkb0", 00:16:53.307 "id": 0, 00:16:53.307 "queue_depth": 512, 00:16:53.307 "num_queues": 4, 00:16:53.307 "bdev_name": "Malloc0" 00:16:53.307 } 00:16:53.307 ]' 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:53.307 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:53.564 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:53.564 21:16:04 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:53.564 21:16:04 ublk.test_create_ublk -- 
lvol/common.sh@43 -- # local rw=write 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:53.564 21:16:04 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:53.564 fio: verification read phase will never start because write phase uses all of runtime 00:16:53.564 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:53.564 fio-3.35 00:16:53.564 Starting 1 process 00:17:05.766 00:17:05.766 fio_test: (groupid=0, jobs=1): err= 0: pid=77377: Sun Jul 14 21:16:15 2024 00:17:05.766 write: IOPS=12.0k, BW=47.1MiB/s (49.3MB/s)(471MiB/10001msec); 0 zone resets 00:17:05.766 clat (usec): min=45, max=3998, avg=81.84, stdev=117.57 00:17:05.766 lat (usec): min=46, max=3998, avg=82.45, stdev=117.58 00:17:05.766 clat percentiles (usec): 00:17:05.766 | 1.00th=[ 52], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:17:05.766 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 74], 00:17:05.766 | 70.00th=[ 76], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 101], 00:17:05.766 | 99.00th=[ 122], 99.50th=[ 137], 99.90th=[ 2507], 99.95th=[ 3064], 00:17:05.766 | 99.99th=[ 3490] 00:17:05.766 bw ( KiB/s): min=46728, max=54672, per=100.00%, avg=48250.53, stdev=1773.17, samples=19 00:17:05.766 iops : min=11682, max=13668, avg=12062.53, stdev=443.26, samples=19 00:17:05.766 lat (usec) : 50=0.25%, 100=94.49%, 250=4.94%, 500=0.02%, 750=0.02% 00:17:05.766 lat (usec) : 1000=0.02% 00:17:05.766 lat (msec) : 2=0.11%, 4=0.15% 00:17:05.766 cpu : usr=2.32%, sys=6.40%, ctx=120500, majf=0, minf=795 00:17:05.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.766 issued rwts: total=0,120493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.766 00:17:05.766 Run status group 0 (all jobs): 00:17:05.766 WRITE: bw=47.1MiB/s (49.3MB/s), 47.1MiB/s-47.1MiB/s (49.3MB/s-49.3MB/s), io=471MiB (494MB), run=10001-10001msec 00:17:05.766 00:17:05.766 Disk stats (read/write): 00:17:05.766 ublkb0: ios=0/119247, merge=0/0, ticks=0/9097, in_queue=9097, util=99.11% 00:17:05.766 21:16:15 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:05.766 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.766 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.766 [2024-07-14 
21:16:15.110457] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:05.766 [2024-07-14 21:16:15.149337] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:05.766 [2024-07-14 21:16:15.150755] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:05.766 [2024-07-14 21:16:15.156926] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:05.766 [2024-07-14 21:16:15.157316] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:05.766 [2024-07-14 21:16:15.157340] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 [2024-07-14 21:16:15.179985] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:05.767 request: 00:17:05.767 { 00:17:05.767 "ublk_id": 0, 00:17:05.767 "method": "ublk_stop_disk", 00:17:05.767 "req_id": 1 00:17:05.767 } 00:17:05.767 Got JSON-RPC error response 00:17:05.767 response: 00:17:05.767 { 00:17:05.767 "code": -19, 00:17:05.767 "message": "No such device" 00:17:05.767 } 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:05.767 21:16:15 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 [2024-07-14 21:16:15.197921] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:05.767 [2024-07-14 21:16:15.204862] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:05.767 [2024-07-14 21:16:15.204915] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 
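The error path just traced is deliberate: after ublk_stop_disk 0 succeeds once, the NOT helper repeats the call, and the test passes only because the second attempt fails with JSON-RPC error -19 ("No such device"; es=1 in the helper's bookkeeping). Reduced to plain rpc.py calls, the assertion is roughly the following sketch:

    # Sketch: a second stop of the same ublk id must fail with -19 / ENODEV.
    ./scripts/rpc.py ublk_stop_disk 0            # first stop succeeds
    if ./scripts/rpc.py ublk_stop_disk 0; then   # second stop must be rejected
        echo 'BUG: stopping a nonexistent ublk disk succeeded' >&2
        exit 1
    fi
    ./scripts/rpc.py ublk_destroy_target         # then tear the target down, as traced next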
00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:05.767 21:16:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:05.767 21:16:15 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:05.767 21:16:15 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:05.767 21:16:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:05.767 21:16:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:05.767 21:16:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:05.767 00:17:05.767 real 0m11.289s 00:17:05.767 user 0m0.660s 00:17:05.767 sys 0m0.722s 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.767 21:16:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 ************************************ 00:17:05.767 END TEST test_create_ublk 00:17:05.767 ************************************ 00:17:05.767 21:16:15 ublk -- common/autotest_common.sh@1142 -- # return 0 00:17:05.767 21:16:15 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:05.767 21:16:15 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:05.767 21:16:15 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.767 21:16:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 ************************************ 00:17:05.767 START TEST test_create_multi_ublk 00:17:05.767 ************************************ 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 [2024-07-14 21:16:15.669930] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:05.767 [2024-07-14 21:16:15.672242] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.767 21:16:15 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 [2024-07-14 21:16:15.889044] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:05.767 [2024-07-14 21:16:15.889504] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:05.767 [2024-07-14 21:16:15.889521] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:05.767 [2024-07-14 21:16:15.889530] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.767 [2024-07-14 21:16:15.896238] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.767 [2024-07-14 21:16:15.896262] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.767 [2024-07-14 21:16:15.903926] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.767 [2024-07-14 21:16:15.904687] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:05.767 [2024-07-14 21:16:15.926927] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.767 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:05.768 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.768 21:16:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:05.768 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.768 21:16:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 [2024-07-14 21:16:16.159036] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:05.768 [2024-07-14 21:16:16.159522] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:05.768 [2024-07-14 21:16:16.159537] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:05.768 [2024-07-14 21:16:16.159549] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.768 [2024-07-14 
21:16:16.165852] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.768 [2024-07-14 21:16:16.165882] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.768 [2024-07-14 21:16:16.172905] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.768 [2024-07-14 21:16:16.173622] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:05.768 [2024-07-14 21:16:16.189871] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 [2024-07-14 21:16:16.425066] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:05.768 [2024-07-14 21:16:16.425482] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:05.768 [2024-07-14 21:16:16.425501] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:05.768 [2024-07-14 21:16:16.425510] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.768 [2024-07-14 21:16:16.432931] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.768 [2024-07-14 21:16:16.432960] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.768 [2024-07-14 21:16:16.440915] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.768 [2024-07-14 21:16:16.441780] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:05.768 [2024-07-14 21:16:16.449961] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
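test_create_multi_ublk repeats the same create sequence four times over seq 0 $MAX_DEV_ID (MAX_DEV_ID=3 per the suite header): each pass allocates a 128 MiB malloc bdev and exposes it as /dev/ublkb$i with 4 queues of depth 512. Malloc3's disk start is traced next; the whole loop, reduced to direct rpc.py calls, is roughly this sketch, with all values taken from the trace:

    # Sketch of the multi-ublk creation loop (4 disks, 4 queues, depth 512).
    for i in $(seq 0 3); do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
    done
    ./scripts/rpc.py ublk_get_disks    # should list ublkb0..ublkb3, as printed below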
00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 [2024-07-14 21:16:16.671954] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:05.768 [2024-07-14 21:16:16.672460] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:05.768 [2024-07-14 21:16:16.672484] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:05.768 [2024-07-14 21:16:16.672497] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.768 [2024-07-14 21:16:16.679945] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.768 [2024-07-14 21:16:16.680010] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.768 [2024-07-14 21:16:16.687876] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.768 [2024-07-14 21:16:16.688661] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:05.768 [2024-07-14 21:16:16.696929] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:05.768 { 00:17:05.768 "ublk_device": "/dev/ublkb0", 00:17:05.768 "id": 0, 00:17:05.768 "queue_depth": 512, 00:17:05.768 "num_queues": 4, 00:17:05.768 "bdev_name": "Malloc0" 00:17:05.768 }, 00:17:05.768 { 00:17:05.768 "ublk_device": "/dev/ublkb1", 00:17:05.768 "id": 1, 00:17:05.768 "queue_depth": 512, 00:17:05.768 "num_queues": 4, 00:17:05.768 "bdev_name": "Malloc1" 00:17:05.768 }, 00:17:05.768 { 00:17:05.768 "ublk_device": "/dev/ublkb2", 00:17:05.768 "id": 2, 00:17:05.768 "queue_depth": 512, 00:17:05.768 "num_queues": 4, 00:17:05.768 "bdev_name": "Malloc2" 00:17:05.768 }, 00:17:05.768 { 00:17:05.768 "ublk_device": "/dev/ublkb3", 00:17:05.768 "id": 3, 00:17:05.768 "queue_depth": 512, 00:17:05.768 "num_queues": 4, 00:17:05.768 "bdev_name": "Malloc3" 00:17:05.768 } 00:17:05.768 ]' 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # 
[[ 0 = \0 ]] 00:17:05.768 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:05.769 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:05.769 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:05.769 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:05.769 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:05.769 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:05.769 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.769 21:16:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.769 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:06.027 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:06.296 21:16:17 
ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 [2024-07-14 21:16:17.773100] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.296 [2024-07-14 21:16:17.816963] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.296 [2024-07-14 21:16:17.818219] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.296 [2024-07-14 21:16:17.824930] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.296 [2024-07-14 21:16:17.825291] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:06.296 [2024-07-14 21:16:17.825313] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.296 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.573 [2024-07-14 21:16:17.840007] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.573 [2024-07-14 21:16:17.884905] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.573 [2024-07-14 21:16:17.886118] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.573 [2024-07-14 21:16:17.894080] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.573 [2024-07-14 21:16:17.894454] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:06.573 [2024-07-14 21:16:17.894469] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.573 [2024-07-14 21:16:17.901911] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.573 [2024-07-14 21:16:17.931279] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.573 [2024-07-14 21:16:17.932919] ublk.c: 
434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.573 [2024-07-14 21:16:17.936864] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.573 [2024-07-14 21:16:17.937199] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:06.573 [2024-07-14 21:16:17.937217] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.573 [2024-07-14 21:16:17.949105] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.573 [2024-07-14 21:16:17.975379] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.573 [2024-07-14 21:16:17.979218] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.573 [2024-07-14 21:16:17.984925] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.573 [2024-07-14 21:16:17.985304] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:06.573 [2024-07-14 21:16:17.985319] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.573 21:16:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:06.841 [2024-07-14 21:16:18.236988] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:06.841 [2024-07-14 21:16:18.242872] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:06.841 [2024-07-14 21:16:18.242913] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:06.841 21:16:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:06.841 21:16:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.841 21:16:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:06.841 21:16:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.841 21:16:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.100 21:16:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.100 21:16:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.100 21:16:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:07.100 21:16:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.100 21:16:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.359 21:16:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.359 21:16:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.359 21:16:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:07.359 21:16:18 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.359 21:16:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.618 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.618 21:16:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.618 21:16:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:07.618 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.618 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:07.877 21:16:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:08.137 ************************************ 00:17:08.137 END TEST test_create_multi_ublk 00:17:08.137 ************************************ 00:17:08.137 21:16:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:08.137 00:17:08.137 real 0m3.809s 00:17:08.137 user 0m1.325s 00:17:08.137 sys 0m0.154s 00:17:08.137 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.137 21:16:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@1142 -- # return 0 00:17:08.137 21:16:19 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:08.137 21:16:19 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:08.137 21:16:19 ublk -- ublk/ublk.sh@130 -- # killprocess 77326 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@948 -- # '[' -z 77326 ']' 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@952 -- # kill -0 77326 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@953 -- # uname 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77326 00:17:08.137 killing process with pid 77326 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77326' 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@967 -- # kill 77326 00:17:08.137 21:16:19 ublk -- common/autotest_common.sh@972 -- # wait 77326 00:17:09.074 [2024-07-14 21:16:20.393500] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:09.074 [2024-07-14 21:16:20.393561] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:10.012 00:17:10.012 real 0m25.896s 00:17:10.012 user 0m39.341s 00:17:10.012 sys 0m7.863s 00:17:10.012 21:16:21 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.012 21:16:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.012 ************************************ 00:17:10.012 END TEST ublk 00:17:10.012 ************************************ 00:17:10.012 21:16:21 -- common/autotest_common.sh@1142 -- # return 0 00:17:10.012 21:16:21 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:10.012 21:16:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:10.012 21:16:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.012 21:16:21 -- common/autotest_common.sh@10 -- # set +x 00:17:10.012 ************************************ 00:17:10.012 START TEST ublk_recovery 00:17:10.012 ************************************ 00:17:10.012 21:16:21 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:10.012 * Looking for test storage... 00:17:10.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:10.012 21:16:21 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:10.012 21:16:21 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:10.012 21:16:21 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:10.012 21:16:21 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:10.012 21:16:21 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:10.013 21:16:21 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:10.013 21:16:21 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:10.013 21:16:21 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:10.013 21:16:21 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:10.013 21:16:21 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:10.013 21:16:21 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77708 00:17:10.013 21:16:21 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:10.013 21:16:21 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.013 21:16:21 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77708 00:17:10.013 21:16:21 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 77708 ']' 00:17:10.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.013 21:16:21 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.013 21:16:21 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.013 21:16:21 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
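ublk_recovery.sh starts a fresh target and then builds a single ublk disk to run fio against. A minimal sketch of that bring-up, using only commands that appear in this log (rpc.py talks to the default /var/tmp/spdk.sock):

    # Load the kernel driver, start the target with ublk debug logging on
    # cores 0-1, then expose a 64 MiB / 4 KiB-block malloc bdev as /dev/ublkb1.
    modprobe ublk_drv
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # ublk id 1, 2 queues, depth 128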
00:17:10.013 21:16:21 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.013 21:16:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.272 [2024-07-14 21:16:21.661214] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:10.272 [2024-07-14 21:16:21.661397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77708 ] 00:17:10.531 [2024-07-14 21:16:21.832599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:10.531 [2024-07-14 21:16:22.003226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.531 [2024-07-14 21:16:22.003231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:17:11.468 21:16:22 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.468 [2024-07-14 21:16:22.652921] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:11.468 [2024-07-14 21:16:22.655257] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.468 21:16:22 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.468 malloc0 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.468 21:16:22 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.468 [2024-07-14 21:16:22.765355] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:17:11.468 [2024-07-14 21:16:22.765506] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:11.468 [2024-07-14 21:16:22.765522] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:11.468 [2024-07-14 21:16:22.765533] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.468 [2024-07-14 21:16:22.774046] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.468 [2024-07-14 21:16:22.774097] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.468 [2024-07-14 21:16:22.780943] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.468 [2024-07-14 21:16:22.781130] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:11.468 [2024-07-14 21:16:22.795889] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.468 1 00:17:11.468 21:16:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:17:11.468 21:16:22 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:12.403 21:16:23 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77743 00:17:12.403 21:16:23 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:12.403 21:16:23 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:12.403 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.403 fio-3.35 00:17:12.403 Starting 1 process 00:17:17.673 21:16:28 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77708 00:17:17.673 21:16:28 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:22.942 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77708 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:22.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.942 21:16:33 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77849 00:17:22.942 21:16:33 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:22.942 21:16:33 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.942 21:16:33 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77849 00:17:22.942 21:16:33 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 77849 ']' 00:17:22.942 21:16:33 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.942 21:16:33 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.942 21:16:33 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.942 21:16:33 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.942 21:16:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.942 [2024-07-14 21:16:33.932950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
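This restart is the recovery under test: the first target (pid 77708) was killed with SIGKILL while fio was mid-I/O, and the second target (pid 77849, starting here) re-adopts the still-live kernel device instead of recreating it. Sketched from the commands in this log:

    kill -9 "$spdk_pid"                          # hard-kill mid-I/O; /dev/ublkb1 stays present
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1   # GET_DEV_INFO, then START/END_USER_RECOVERY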
00:17:22.942 [2024-07-14 21:16:33.933144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77849 ] 00:17:22.942 [2024-07-14 21:16:34.101013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:22.942 [2024-07-14 21:16:34.304163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.942 [2024-07-14 21:16:34.304168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.509 21:16:34 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.509 21:16:34 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:17:23.509 21:16:34 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:23.509 21:16:34 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.509 21:16:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.510 [2024-07-14 21:16:34.930881] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:23.510 [2024-07-14 21:16:34.933269] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:23.510 21:16:34 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.510 21:16:34 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:23.510 21:16:34 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.510 21:16:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.510 malloc0 00:17:23.510 21:16:35 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.510 21:16:35 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:23.510 21:16:35 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.510 21:16:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.510 [2024-07-14 21:16:35.049127] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:23.510 [2024-07-14 21:16:35.049230] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:23.510 [2024-07-14 21:16:35.049243] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:23.768 [2024-07-14 21:16:35.056104] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:23.768 [2024-07-14 21:16:35.056145] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:23.768 [2024-07-14 21:16:35.056287] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:23.768 1 00:17:23.768 21:16:35 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.768 21:16:35 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77743 00:17:23.768 [2024-07-14 21:16:35.063955] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:23.768 [2024-07-14 21:16:35.070751] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:23.768 [2024-07-14 21:16:35.077143] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:23.768 [2024-07-14 21:16:35.077191] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:20.049 00:18:20.049 
fio_test: (groupid=0, jobs=1): err= 0: pid=77752: Sun Jul 14 21:17:24 2024 00:18:20.049 read: IOPS=19.4k, BW=75.6MiB/s (79.3MB/s)(4536MiB/60001msec) 00:18:20.049 slat (nsec): min=1787, max=211253, avg=6225.84, stdev=3106.58 00:18:20.049 clat (usec): min=1166, max=6279.7k, avg=3259.28, stdev=47316.04 00:18:20.049 lat (usec): min=1183, max=6279.7k, avg=3265.51, stdev=47316.04 00:18:20.049 clat percentiles (usec): 00:18:20.049 | 1.00th=[ 2376], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2638], 00:18:20.049 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2835], 00:18:20.049 | 70.00th=[ 2900], 80.00th=[ 2999], 90.00th=[ 3163], 95.00th=[ 3818], 00:18:20.049 | 99.00th=[ 5473], 99.50th=[ 6128], 99.90th=[ 7439], 99.95th=[ 8356], 00:18:20.049 | 99.99th=[12780] 00:18:20.049 bw ( KiB/s): min=39544, max=92424, per=100.00%, avg=86097.46, stdev=7176.35, samples=107 00:18:20.049 iops : min= 9886, max=23106, avg=21524.35, stdev=1794.08, samples=107 00:18:20.049 write: IOPS=19.3k, BW=75.6MiB/s (79.2MB/s)(4534MiB/60001msec); 0 zone resets 00:18:20.049 slat (nsec): min=1847, max=1290.7k, avg=6302.23, stdev=3404.89 00:18:20.049 clat (usec): min=923, max=6280.0k, avg=3342.06, stdev=45872.17 00:18:20.049 lat (usec): min=943, max=6280.0k, avg=3348.37, stdev=45872.17 00:18:20.049 clat percentiles (usec): 00:18:20.049 | 1.00th=[ 2474], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:18:20.049 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:18:20.049 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 3687], 00:18:20.049 | 99.00th=[ 5473], 99.50th=[ 6194], 99.90th=[ 7504], 99.95th=[ 8160], 00:18:20.049 | 99.99th=[12780] 00:18:20.049 bw ( KiB/s): min=38544, max=91328, per=100.00%, avg=86056.59, stdev=7226.29, samples=107 00:18:20.049 iops : min= 9636, max=22832, avg=21514.12, stdev=1806.57, samples=107 00:18:20.049 lat (usec) : 1000=0.01% 00:18:20.049 lat (msec) : 2=0.12%, 4=95.69%, 10=4.17%, 20=0.02%, >=2000=0.01% 00:18:20.049 cpu : usr=10.30%, sys=22.53%, ctx=68083, majf=0, minf=13 00:18:20.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:20.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.049 issued rwts: total=1161251,1160653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.049 00:18:20.049 Run status group 0 (all jobs): 00:18:20.049 READ: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=4536MiB (4756MB), run=60001-60001msec 00:18:20.049 WRITE: bw=75.6MiB/s (79.2MB/s), 75.6MiB/s-75.6MiB/s (79.2MB/s-79.2MB/s), io=4534MiB (4754MB), run=60001-60001msec 00:18:20.049 00:18:20.049 Disk stats (read/write): 00:18:20.049 ublkb1: ios=1158789/1158197, merge=0/0, ticks=3672179/3639141, in_queue=7311320, util=99.95% 00:18:20.049 21:17:24 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:20.049 21:17:24 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.049 21:17:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.049 [2024-07-14 21:17:24.065984] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:20.049 [2024-07-14 21:17:24.106984] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:20.049 [2024-07-14 21:17:24.107258] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_DEL_DEV 00:18:20.049 [2024-07-14 21:17:24.115953] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:20.049 [2024-07-14 21:17:24.116088] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:20.049 [2024-07-14 21:17:24.116104] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:20.049 21:17:24 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.049 21:17:24 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:20.049 21:17:24 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.049 21:17:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.049 [2024-07-14 21:17:24.131012] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:20.049 [2024-07-14 21:17:24.137958] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:20.049 [2024-07-14 21:17:24.137999] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:20.049 21:17:24 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.049 21:17:24 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:20.049 21:17:24 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:20.050 21:17:24 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77849 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 77849 ']' 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 77849 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77849 00:18:20.050 killing process with pid 77849 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77849' 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@967 -- # kill 77849 00:18:20.050 21:17:24 ublk_recovery -- common/autotest_common.sh@972 -- # wait 77849 00:18:20.050 [2024-07-14 21:17:25.000754] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:20.050 [2024-07-14 21:17:25.000841] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:20.050 00:18:20.050 real 1m4.652s 00:18:20.050 user 1m45.922s 00:18:20.050 sys 0m32.065s 00:18:20.050 ************************************ 00:18:20.050 END TEST ublk_recovery 00:18:20.050 ************************************ 00:18:20.050 21:17:26 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.050 21:17:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.050 21:17:26 -- common/autotest_common.sh@1142 -- # return 0 00:18:20.050 21:17:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:20.050 21:17:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.050 21:17:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.050 21:17:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- 
spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:18:20.050 21:17:26 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.050 21:17:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:20.050 21:17:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.050 21:17:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.050 ************************************ 00:18:20.050 START TEST ftl 00:18:20.050 ************************************ 00:18:20.050 21:17:26 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.050 * Looking for test storage... 00:18:20.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:20.050 21:17:26 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.050 21:17:26 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.050 21:17:26 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.050 21:17:26 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:20.050 21:17:26 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:20.050 21:17:26 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.050 21:17:26 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:20.050 21:17:26 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:20.050 21:17:26 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.050 21:17:26 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.050 21:17:26 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:20.050 21:17:26 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:20.050 21:17:26 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.050 21:17:26 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.050 21:17:26 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:20.050 21:17:26 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:20.050 21:17:26 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.050 21:17:26 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.050 21:17:26 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:20.050 21:17:26 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:20.050 21:17:26 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.050 21:17:26 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.050 21:17:26 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.050 21:17:26 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.050 21:17:26 ftl -- ftl/common.sh@23 -- # export 
spdk_ini_pid= 00:18:20.050 21:17:26 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:20.050 21:17:26 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.050 21:17:26 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:20.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:20.050 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:20.050 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:20.050 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:20.050 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78630 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:20.050 21:17:26 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78630 00:18:20.050 21:17:26 ftl -- common/autotest_common.sh@829 -- # '[' -z 78630 ']' 00:18:20.050 21:17:26 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.050 21:17:26 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.050 21:17:26 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.050 21:17:26 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.050 21:17:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:20.050 [2024-07-14 21:17:26.956163] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
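ftl.sh uses SPDK's deferred-init startup: --wait-for-rpc holds the target before subsystem initialization so bdev options can be changed first. A sketch of the handshake shown below; -d disables automatic bdev examine, and the /dev/fd/62 seen in the log is bash process substitution over gen_nvme.sh output:

    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py bdev_set_options -d                      # must run before init completes
    scripts/rpc.py framework_start_init                     # now let subsystems come up
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)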
00:18:20.050 [2024-07-14 21:17:26.956327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78630 ] 00:18:20.050 [2024-07-14 21:17:27.131181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.050 [2024-07-14 21:17:27.362441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.050 21:17:27 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.050 21:17:27 ftl -- common/autotest_common.sh@862 -- # return 0 00:18:20.050 21:17:27 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:20.050 21:17:28 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:20.050 21:17:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:20.050 21:17:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@50 -- # break 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@63 -- # break 00:18:20.050 21:17:29 ftl -- ftl/ftl.sh@66 -- # killprocess 78630 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@948 -- # '[' -z 78630 ']' 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@952 -- # kill -0 78630 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@953 -- # uname 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78630 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:20.050 killing process with pid 78630 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78630' 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@967 -- # kill 78630 00:18:20.050 21:17:29 ftl -- common/autotest_common.sh@972 -- # wait 78630 00:18:20.309 21:17:31 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:20.309 21:17:31 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:20.309 21:17:31 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:20.309 21:17:31 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.309 21:17:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:20.309 ************************************ 00:18:20.309 START TEST ftl_fio_basic 00:18:20.309 ************************************ 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:20.309 * Looking for test storage... 00:18:20.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:20.309 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78760 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78760 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 78760 ']' 00:18:20.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.310 21:17:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:20.568 [2024-07-14 21:17:31.957445] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
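The suite table above is a plain bash associative array keyed by the script's third argument ('basic' here), and FTL_BDEV_NAME / FTL_JSON_CONF are exported so the fio jobs can find the FTL device. A sketch of the selection; the per-test fio invocation and the config/fio path are assumptions, not shown in this log:

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    tests=${suite[$3]}                     # $3 is 'basic' in: fio.sh 0000:00:11.0 0000:00:10.0 basic
    for t in $tests; do
        fio "$testdir/config/fio/$t.fio"   # hypothetical path; job names come from the table
    done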
00:18:20.568 [2024-07-14 21:17:31.957621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78760 ] 00:18:20.827 [2024-07-14 21:17:32.127337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:20.827 [2024-07-14 21:17:32.284711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.827 [2024-07-14 21:17:32.284905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.827 [2024-07-14 21:17:32.285111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.393 21:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.393 21:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:18:21.393 21:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:21.393 21:17:32 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:21.393 21:17:32 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:21.393 21:17:32 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:21.393 21:17:32 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:21.393 21:17:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:21.957 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:21.957 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:21.957 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:21.957 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:21.957 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:21.957 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:18:21.957 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:18:21.957 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:22.215 { 00:18:22.215 "name": "nvme0n1", 00:18:22.215 "aliases": [ 00:18:22.215 "c61f7dd5-4714-4b83-bc91-3493bf04c3c2" 00:18:22.215 ], 00:18:22.215 "product_name": "NVMe disk", 00:18:22.215 "block_size": 4096, 00:18:22.215 "num_blocks": 1310720, 00:18:22.215 "uuid": "c61f7dd5-4714-4b83-bc91-3493bf04c3c2", 00:18:22.215 "assigned_rate_limits": { 00:18:22.215 "rw_ios_per_sec": 0, 00:18:22.215 "rw_mbytes_per_sec": 0, 00:18:22.215 "r_mbytes_per_sec": 0, 00:18:22.215 "w_mbytes_per_sec": 0 00:18:22.215 }, 00:18:22.215 "claimed": false, 00:18:22.215 "zoned": false, 00:18:22.215 "supported_io_types": { 00:18:22.215 "read": true, 00:18:22.215 "write": true, 00:18:22.215 "unmap": true, 00:18:22.215 "flush": true, 00:18:22.215 "reset": true, 00:18:22.215 "nvme_admin": true, 00:18:22.215 "nvme_io": true, 00:18:22.215 "nvme_io_md": false, 00:18:22.215 "write_zeroes": true, 00:18:22.215 "zcopy": false, 00:18:22.215 "get_zone_info": false, 00:18:22.215 "zone_management": false, 00:18:22.215 "zone_append": false, 00:18:22.215 "compare": true, 00:18:22.215 "compare_and_write": false, 00:18:22.215 "abort": true, 00:18:22.215 "seek_hole": false, 00:18:22.215 
"seek_data": false, 00:18:22.215 "copy": true, 00:18:22.215 "nvme_iov_md": false 00:18:22.215 }, 00:18:22.215 "driver_specific": { 00:18:22.215 "nvme": [ 00:18:22.215 { 00:18:22.215 "pci_address": "0000:00:11.0", 00:18:22.215 "trid": { 00:18:22.215 "trtype": "PCIe", 00:18:22.215 "traddr": "0000:00:11.0" 00:18:22.215 }, 00:18:22.215 "ctrlr_data": { 00:18:22.215 "cntlid": 0, 00:18:22.215 "vendor_id": "0x1b36", 00:18:22.215 "model_number": "QEMU NVMe Ctrl", 00:18:22.215 "serial_number": "12341", 00:18:22.215 "firmware_revision": "8.0.0", 00:18:22.215 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:22.215 "oacs": { 00:18:22.215 "security": 0, 00:18:22.215 "format": 1, 00:18:22.215 "firmware": 0, 00:18:22.215 "ns_manage": 1 00:18:22.215 }, 00:18:22.215 "multi_ctrlr": false, 00:18:22.215 "ana_reporting": false 00:18:22.215 }, 00:18:22.215 "vs": { 00:18:22.215 "nvme_version": "1.4" 00:18:22.215 }, 00:18:22.215 "ns_data": { 00:18:22.215 "id": 1, 00:18:22.215 "can_share": false 00:18:22.215 } 00:18:22.215 } 00:18:22.215 ], 00:18:22.215 "mp_policy": "active_passive" 00:18:22.215 } 00:18:22.215 } 00:18:22.215 ]' 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:22.215 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:22.472 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:22.472 21:17:33 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:22.729 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=b088627f-a024-43f0-b9b0-88441ff09939 00:18:22.729 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b088627f-a024-43f0-b9b0-88441ff09939 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=0ed9577d-c813-41cf-adef-d3ccac769075 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0ed9577d-c813-41cf-adef-d3ccac769075 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=0ed9577d-c813-41cf-adef-d3ccac769075 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 0ed9577d-c813-41cf-adef-d3ccac769075 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0ed9577d-c813-41cf-adef-d3ccac769075 00:18:22.985 21:17:34 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:18:22.985 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ed9577d-c813-41cf-adef-d3ccac769075 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:23.243 { 00:18:23.243 "name": "0ed9577d-c813-41cf-adef-d3ccac769075", 00:18:23.243 "aliases": [ 00:18:23.243 "lvs/nvme0n1p0" 00:18:23.243 ], 00:18:23.243 "product_name": "Logical Volume", 00:18:23.243 "block_size": 4096, 00:18:23.243 "num_blocks": 26476544, 00:18:23.243 "uuid": "0ed9577d-c813-41cf-adef-d3ccac769075", 00:18:23.243 "assigned_rate_limits": { 00:18:23.243 "rw_ios_per_sec": 0, 00:18:23.243 "rw_mbytes_per_sec": 0, 00:18:23.243 "r_mbytes_per_sec": 0, 00:18:23.243 "w_mbytes_per_sec": 0 00:18:23.243 }, 00:18:23.243 "claimed": false, 00:18:23.243 "zoned": false, 00:18:23.243 "supported_io_types": { 00:18:23.243 "read": true, 00:18:23.243 "write": true, 00:18:23.243 "unmap": true, 00:18:23.243 "flush": false, 00:18:23.243 "reset": true, 00:18:23.243 "nvme_admin": false, 00:18:23.243 "nvme_io": false, 00:18:23.243 "nvme_io_md": false, 00:18:23.243 "write_zeroes": true, 00:18:23.243 "zcopy": false, 00:18:23.243 "get_zone_info": false, 00:18:23.243 "zone_management": false, 00:18:23.243 "zone_append": false, 00:18:23.243 "compare": false, 00:18:23.243 "compare_and_write": false, 00:18:23.243 "abort": false, 00:18:23.243 "seek_hole": true, 00:18:23.243 "seek_data": true, 00:18:23.243 "copy": false, 00:18:23.243 "nvme_iov_md": false 00:18:23.243 }, 00:18:23.243 "driver_specific": { 00:18:23.243 "lvol": { 00:18:23.243 "lvol_store_uuid": "b088627f-a024-43f0-b9b0-88441ff09939", 00:18:23.243 "base_bdev": "nvme0n1", 00:18:23.243 "thin_provision": true, 00:18:23.243 "num_allocated_clusters": 0, 00:18:23.243 "snapshot": false, 00:18:23.243 "clone": false, 00:18:23.243 "esnap_clone": false 00:18:23.243 } 00:18:23.243 } 00:18:23.243 } 00:18:23.243 ]' 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:23.243 21:17:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 0ed9577d-c813-41cf-adef-d3ccac769075 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0ed9577d-c813-41cf-adef-d3ccac769075 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ed9577d-c813-41cf-adef-d3ccac769075 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:23.808 { 00:18:23.808 "name": "0ed9577d-c813-41cf-adef-d3ccac769075", 00:18:23.808 "aliases": [ 00:18:23.808 "lvs/nvme0n1p0" 00:18:23.808 ], 00:18:23.808 "product_name": "Logical Volume", 00:18:23.808 "block_size": 4096, 00:18:23.808 "num_blocks": 26476544, 00:18:23.808 "uuid": "0ed9577d-c813-41cf-adef-d3ccac769075", 00:18:23.808 "assigned_rate_limits": { 00:18:23.808 "rw_ios_per_sec": 0, 00:18:23.808 "rw_mbytes_per_sec": 0, 00:18:23.808 "r_mbytes_per_sec": 0, 00:18:23.808 "w_mbytes_per_sec": 0 00:18:23.808 }, 00:18:23.808 "claimed": false, 00:18:23.808 "zoned": false, 00:18:23.808 "supported_io_types": { 00:18:23.808 "read": true, 00:18:23.808 "write": true, 00:18:23.808 "unmap": true, 00:18:23.808 "flush": false, 00:18:23.808 "reset": true, 00:18:23.808 "nvme_admin": false, 00:18:23.808 "nvme_io": false, 00:18:23.808 "nvme_io_md": false, 00:18:23.808 "write_zeroes": true, 00:18:23.808 "zcopy": false, 00:18:23.808 "get_zone_info": false, 00:18:23.808 "zone_management": false, 00:18:23.808 "zone_append": false, 00:18:23.808 "compare": false, 00:18:23.808 "compare_and_write": false, 00:18:23.808 "abort": false, 00:18:23.808 "seek_hole": true, 00:18:23.808 "seek_data": true, 00:18:23.808 "copy": false, 00:18:23.808 "nvme_iov_md": false 00:18:23.808 }, 00:18:23.808 "driver_specific": { 00:18:23.808 "lvol": { 00:18:23.808 "lvol_store_uuid": "b088627f-a024-43f0-b9b0-88441ff09939", 00:18:23.808 "base_bdev": "nvme0n1", 00:18:23.808 "thin_provision": true, 00:18:23.808 "num_allocated_clusters": 0, 00:18:23.808 "snapshot": false, 00:18:23.808 "clone": false, 00:18:23.808 "esnap_clone": false 00:18:23.808 } 00:18:23.808 } 00:18:23.808 } 00:18:23.808 ]' 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:18:23.808 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:24.065 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:24.065 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:24.065 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:18:24.065 21:17:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:24.065 21:17:35 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:24.323 21:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:24.323 21:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:24.323 21:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:24.323 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:24.323 21:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 0ed9577d-c813-41cf-adef-d3ccac769075 00:18:24.323 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0ed9577d-c813-41cf-adef-d3ccac769075 
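By this point the trace has already run the same size probe three times and is starting a fourth: get_bdev_size feeds bdev_get_bdevs output through jq and converts block_size * num_blocks to MiB, which is how bs=4096 and nb=1310720 became bdev_size=5120 for nvme0n1, and nb=26476544 became 103424 for the thin lvol. A minimal sketch of that pattern, with the helper and variable names taken from the trace and the standalone scaffolding (the rpc shorthand) added for illustration:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {
        # Query one bdev and reduce its JSON to a size in MiB, e.g.
        #   4096 B * 1310720 blocks / 1048576 = 5120 MiB   (nvme0n1)
        #   4096 B * 26476544 blocks / 1048576 = 103424 MiB (the lvol)
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))
    }

The "[: -eq: unary operator expected" message from fio.sh line 52 just above is the classic empty-operand failure: an unset variable expanded to nothing inside a bare '[ ... -eq 1 ]', so test saw only '-eq 1'. A guarded form such as [[ "${flag:-0}" -eq 1 ]] (flag is a placeholder name here) stays well-formed; the run proceeds past it because the surrounding conditional treats the broken test as false.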
00:18:24.323 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:24.323 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:18:24.323 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:18:24.323 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ed9577d-c813-41cf-adef-d3ccac769075 00:18:24.580 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:24.580 { 00:18:24.580 "name": "0ed9577d-c813-41cf-adef-d3ccac769075", 00:18:24.580 "aliases": [ 00:18:24.580 "lvs/nvme0n1p0" 00:18:24.580 ], 00:18:24.580 "product_name": "Logical Volume", 00:18:24.580 "block_size": 4096, 00:18:24.580 "num_blocks": 26476544, 00:18:24.580 "uuid": "0ed9577d-c813-41cf-adef-d3ccac769075", 00:18:24.580 "assigned_rate_limits": { 00:18:24.580 "rw_ios_per_sec": 0, 00:18:24.580 "rw_mbytes_per_sec": 0, 00:18:24.581 "r_mbytes_per_sec": 0, 00:18:24.581 "w_mbytes_per_sec": 0 00:18:24.581 }, 00:18:24.581 "claimed": false, 00:18:24.581 "zoned": false, 00:18:24.581 "supported_io_types": { 00:18:24.581 "read": true, 00:18:24.581 "write": true, 00:18:24.581 "unmap": true, 00:18:24.581 "flush": false, 00:18:24.581 "reset": true, 00:18:24.581 "nvme_admin": false, 00:18:24.581 "nvme_io": false, 00:18:24.581 "nvme_io_md": false, 00:18:24.581 "write_zeroes": true, 00:18:24.581 "zcopy": false, 00:18:24.581 "get_zone_info": false, 00:18:24.581 "zone_management": false, 00:18:24.581 "zone_append": false, 00:18:24.581 "compare": false, 00:18:24.581 "compare_and_write": false, 00:18:24.581 "abort": false, 00:18:24.581 "seek_hole": true, 00:18:24.581 "seek_data": true, 00:18:24.581 "copy": false, 00:18:24.581 "nvme_iov_md": false 00:18:24.581 }, 00:18:24.581 "driver_specific": { 00:18:24.581 "lvol": { 00:18:24.581 "lvol_store_uuid": "b088627f-a024-43f0-b9b0-88441ff09939", 00:18:24.581 "base_bdev": "nvme0n1", 00:18:24.581 "thin_provision": true, 00:18:24.581 "num_allocated_clusters": 0, 00:18:24.581 "snapshot": false, 00:18:24.581 "clone": false, 00:18:24.581 "esnap_clone": false 00:18:24.581 } 00:18:24.581 } 00:18:24.581 } 00:18:24.581 ]' 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:24.581 21:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0ed9577d-c813-41cf-adef-d3ccac769075 -c nvc0n1p0 --l2p_dram_limit 60 00:18:24.839 [2024-07-14 21:17:36.203314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.203392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:24.840 [2024-07-14 21:17:36.203430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:24.840 [2024-07-14 21:17:36.203443] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.203528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.203548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.840 [2024-07-14 21:17:36.203560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:24.840 [2024-07-14 21:17:36.203573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.203607] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:24.840 [2024-07-14 21:17:36.204619] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:24.840 [2024-07-14 21:17:36.204653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.204672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.840 [2024-07-14 21:17:36.204685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:18:24.840 [2024-07-14 21:17:36.204698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.204855] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c37facfd-dd97-400b-8ac2-624a64fa44e0 00:18:24.840 [2024-07-14 21:17:36.205958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.205998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:24.840 [2024-07-14 21:17:36.206018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:24.840 [2024-07-14 21:17:36.206030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.210597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.210643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.840 [2024-07-14 21:17:36.210678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms 00:18:24.840 [2024-07-14 21:17:36.210692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.210872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.210894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.840 [2024-07-14 21:17:36.210909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:18:24.840 [2024-07-14 21:17:36.210920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.211036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.211054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:24.840 [2024-07-14 21:17:36.211069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:24.840 [2024-07-14 21:17:36.211081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.211130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.840 [2024-07-14 21:17:36.215660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.215700] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.840 [2024-07-14 21:17:36.215735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.543 ms 00:18:24.840 [2024-07-14 21:17:36.215748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.215800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.215867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:24.840 [2024-07-14 21:17:36.215883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:24.840 [2024-07-14 21:17:36.215896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.215982] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:24.840 [2024-07-14 21:17:36.216178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:24.840 [2024-07-14 21:17:36.216201] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:24.840 [2024-07-14 21:17:36.216225] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:24.840 [2024-07-14 21:17:36.216242] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:24.840 [2024-07-14 21:17:36.216258] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:24.840 [2024-07-14 21:17:36.216271] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:24.840 [2024-07-14 21:17:36.216286] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:24.840 [2024-07-14 21:17:36.216297] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:24.840 [2024-07-14 21:17:36.216312] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:24.840 [2024-07-14 21:17:36.216325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.216338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:24.840 [2024-07-14 21:17:36.216351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:18:24.840 [2024-07-14 21:17:36.216363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.216477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.840 [2024-07-14 21:17:36.216496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:24.840 [2024-07-14 21:17:36.216508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:24.840 [2024-07-14 21:17:36.216521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.840 [2024-07-14 21:17:36.216641] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:24.840 [2024-07-14 21:17:36.216665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:24.840 [2024-07-14 21:17:36.216678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.840 [2024-07-14 21:17:36.216692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.840 [2024-07-14 21:17:36.216703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:24.840 [2024-07-14 
21:17:36.216716] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:24.840 [2024-07-14 21:17:36.216726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:24.840 [2024-07-14 21:17:36.216740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:24.840 [2024-07-14 21:17:36.216750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:24.840 [2024-07-14 21:17:36.216763] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.840 [2024-07-14 21:17:36.216774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:24.840 [2024-07-14 21:17:36.216786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:24.840 [2024-07-14 21:17:36.216810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.840 [2024-07-14 21:17:36.216830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:24.840 [2024-07-14 21:17:36.216841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:24.840 [2024-07-14 21:17:36.216854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.840 [2024-07-14 21:17:36.216864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:24.840 [2024-07-14 21:17:36.216878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:24.840 [2024-07-14 21:17:36.216889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.840 [2024-07-14 21:17:36.216902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:24.840 [2024-07-14 21:17:36.216913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:24.840 [2024-07-14 21:17:36.216929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.840 [2024-07-14 21:17:36.216940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:24.840 [2024-07-14 21:17:36.216952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:24.840 [2024-07-14 21:17:36.216963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.840 [2024-07-14 21:17:36.216976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:24.840 [2024-07-14 21:17:36.216986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:24.840 [2024-07-14 21:17:36.216998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.840 [2024-07-14 21:17:36.217009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:24.840 [2024-07-14 21:17:36.217021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:24.840 [2024-07-14 21:17:36.217031] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.840 [2024-07-14 21:17:36.217043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:24.840 [2024-07-14 21:17:36.217054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:24.840 [2024-07-14 21:17:36.217068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.840 [2024-07-14 21:17:36.217078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:24.840 [2024-07-14 21:17:36.217091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:24.840 [2024-07-14 21:17:36.217101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:24.840 [2024-07-14 21:17:36.217114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:24.840 [2024-07-14 21:17:36.217124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:24.841 [2024-07-14 21:17:36.217138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.841 [2024-07-14 21:17:36.217148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:24.841 [2024-07-14 21:17:36.217176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:24.841 [2024-07-14 21:17:36.217186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.841 [2024-07-14 21:17:36.217214] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:24.841 [2024-07-14 21:17:36.217226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:24.841 [2024-07-14 21:17:36.217259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.841 [2024-07-14 21:17:36.217271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.841 [2024-07-14 21:17:36.217285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:24.841 [2024-07-14 21:17:36.217296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:24.841 [2024-07-14 21:17:36.217311] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:24.841 [2024-07-14 21:17:36.217322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:24.841 [2024-07-14 21:17:36.217334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:24.841 [2024-07-14 21:17:36.217345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:24.841 [2024-07-14 21:17:36.217369] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:24.841 [2024-07-14 21:17:36.217385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.841 [2024-07-14 21:17:36.217400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:24.841 [2024-07-14 21:17:36.217412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:24.841 [2024-07-14 21:17:36.217427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:24.841 [2024-07-14 21:17:36.217438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:24.841 [2024-07-14 21:17:36.217451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:24.841 [2024-07-14 21:17:36.217463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:24.841 [2024-07-14 21:17:36.217477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:24.841 [2024-07-14 21:17:36.217488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:24.841 [2024-07-14 
21:17:36.217503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:24.841 [2024-07-14 21:17:36.217514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:24.841 [2024-07-14 21:17:36.217529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:24.841 [2024-07-14 21:17:36.217541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:24.841 [2024-07-14 21:17:36.217554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:24.841 [2024-07-14 21:17:36.217566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:24.841 [2024-07-14 21:17:36.217579] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:24.841 [2024-07-14 21:17:36.217594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.841 [2024-07-14 21:17:36.217608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:24.841 [2024-07-14 21:17:36.217620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:24.841 [2024-07-14 21:17:36.217633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:24.841 [2024-07-14 21:17:36.217660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:24.841 [2024-07-14 21:17:36.217674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.841 [2024-07-14 21:17:36.217685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:24.841 [2024-07-14 21:17:36.217698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:18:24.841 [2024-07-14 21:17:36.217709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.841 [2024-07-14 21:17:36.217825] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
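The bdev_ftl_create call above ties the pieces together: the thin lvol is the base (data) device, the nvc0n1p0 split is the write-buffer NV cache, and --l2p_dram_limit caps the resident L2P at 60 MiB. A sketch of the call as traced, with cross-checks against the layout dump as comments; only the rpc shorthand variable is added:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -t 240 widens the RPC timeout, presumably for the NV cache scrub the
    # trace announces just above ("this may take a while").
    "$rpc" -t 240 bdev_ftl_create -b ftl0 \
        -d 0ed9577d-c813-41cf-adef-d3ccac769075 \
        -c nvc0n1p0 --l2p_dram_limit 60
    # Numbers in the layout dump are consistent with each other:
    #   20971520 L2P entries * 4 B/entry = 80 MiB -> "Region l2p ... 80.00 MiB"
    #   20971520 blocks * 4 KiB/block    = 80 GiB of exposed user LBA space

The 60 MiB limit takes effect a little further down, where ftl_l2p_cache.c reports "l2p maximum resident size is: 59 (of 60) MiB".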
00:18:24.841 [2024-07-14 21:17:36.217845] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:27.393 [2024-07-14 21:17:38.887788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.393 [2024-07-14 21:17:38.887881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:27.393 [2024-07-14 21:17:38.887905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2670.002 ms 00:18:27.393 [2024-07-14 21:17:38.887916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.393 [2024-07-14 21:17:38.917535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.393 [2024-07-14 21:17:38.917593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:27.393 [2024-07-14 21:17:38.917632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.382 ms 00:18:27.393 [2024-07-14 21:17:38.917642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.393 [2024-07-14 21:17:38.917866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.393 [2024-07-14 21:17:38.917892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:27.393 [2024-07-14 21:17:38.917924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:27.393 [2024-07-14 21:17:38.917935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:38.963636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:38.963698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:27.653 [2024-07-14 21:17:38.963738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.633 ms 00:18:27.653 [2024-07-14 21:17:38.963749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:38.963854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:38.963871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:27.653 [2024-07-14 21:17:38.963885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:27.653 [2024-07-14 21:17:38.963896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:38.964421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:38.964458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:27.653 [2024-07-14 21:17:38.964480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:18:27.653 [2024-07-14 21:17:38.964492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:38.964718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:38.964749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:27.653 [2024-07-14 21:17:38.964768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:18:27.653 [2024-07-14 21:17:38.964782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:38.985930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:38.985990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:27.653 [2024-07-14 
21:17:38.986030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.083 ms 00:18:27.653 [2024-07-14 21:17:38.986041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:38.998313] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:27.653 [2024-07-14 21:17:39.011565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:39.011650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:27.653 [2024-07-14 21:17:39.011674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.378 ms 00:18:27.653 [2024-07-14 21:17:39.011687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:39.066182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:39.066261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:27.653 [2024-07-14 21:17:39.066297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.437 ms 00:18:27.653 [2024-07-14 21:17:39.066311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:39.066562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:39.066584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:27.653 [2024-07-14 21:17:39.066597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:27.653 [2024-07-14 21:17:39.066611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:39.095497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:39.095579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:27.653 [2024-07-14 21:17:39.095598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.813 ms 00:18:27.653 [2024-07-14 21:17:39.095611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:39.123838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:39.123907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:27.653 [2024-07-14 21:17:39.123927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.174 ms 00:18:27.653 [2024-07-14 21:17:39.123939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.653 [2024-07-14 21:17:39.124662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.653 [2024-07-14 21:17:39.124717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:27.653 [2024-07-14 21:17:39.124733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:18:27.653 [2024-07-14 21:17:39.124745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.912 [2024-07-14 21:17:39.224150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.912 [2024-07-14 21:17:39.224237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:27.912 [2024-07-14 21:17:39.224261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.292 ms 00:18:27.912 [2024-07-14 21:17:39.224278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.912 [2024-07-14 
21:17:39.255607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.912 [2024-07-14 21:17:39.255715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:27.912 [2024-07-14 21:17:39.255736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.281 ms 00:18:27.912 [2024-07-14 21:17:39.255749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.912 [2024-07-14 21:17:39.285459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.912 [2024-07-14 21:17:39.285532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:27.912 [2024-07-14 21:17:39.285551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.651 ms 00:18:27.912 [2024-07-14 21:17:39.285564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.912 [2024-07-14 21:17:39.314692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.912 [2024-07-14 21:17:39.314803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:27.912 [2024-07-14 21:17:39.314841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.089 ms 00:18:27.912 [2024-07-14 21:17:39.314858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.912 [2024-07-14 21:17:39.314921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.912 [2024-07-14 21:17:39.314949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:27.912 [2024-07-14 21:17:39.314966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:27.912 [2024-07-14 21:17:39.314982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.912 [2024-07-14 21:17:39.315106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.912 [2024-07-14 21:17:39.315129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:27.912 [2024-07-14 21:17:39.315142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:27.912 [2024-07-14 21:17:39.315155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.912 [2024-07-14 21:17:39.316411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3112.431 ms, result 0 00:18:27.912 { 00:18:27.912 "name": "ftl0", 00:18:27.912 "uuid": "c37facfd-dd97-400b-8ac2-624a64fa44e0" 00:18:27.912 } 00:18:27.912 21:17:39 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:27.912 21:17:39 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:18:27.912 21:17:39 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:27.912 21:17:39 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:18:27.912 21:17:39 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:27.912 21:17:39 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:27.912 21:17:39 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:28.170 21:17:39 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:28.429 [ 00:18:28.429 { 00:18:28.429 "name": "ftl0", 00:18:28.429 "aliases": [ 00:18:28.429 "c37facfd-dd97-400b-8ac2-624a64fa44e0" 00:18:28.429 ], 00:18:28.429 "product_name": "FTL 
disk", 00:18:28.429 "block_size": 4096, 00:18:28.429 "num_blocks": 20971520, 00:18:28.429 "uuid": "c37facfd-dd97-400b-8ac2-624a64fa44e0", 00:18:28.429 "assigned_rate_limits": { 00:18:28.429 "rw_ios_per_sec": 0, 00:18:28.429 "rw_mbytes_per_sec": 0, 00:18:28.429 "r_mbytes_per_sec": 0, 00:18:28.429 "w_mbytes_per_sec": 0 00:18:28.429 }, 00:18:28.429 "claimed": false, 00:18:28.429 "zoned": false, 00:18:28.429 "supported_io_types": { 00:18:28.429 "read": true, 00:18:28.429 "write": true, 00:18:28.429 "unmap": true, 00:18:28.429 "flush": true, 00:18:28.429 "reset": false, 00:18:28.429 "nvme_admin": false, 00:18:28.429 "nvme_io": false, 00:18:28.429 "nvme_io_md": false, 00:18:28.429 "write_zeroes": true, 00:18:28.429 "zcopy": false, 00:18:28.429 "get_zone_info": false, 00:18:28.429 "zone_management": false, 00:18:28.429 "zone_append": false, 00:18:28.429 "compare": false, 00:18:28.429 "compare_and_write": false, 00:18:28.429 "abort": false, 00:18:28.429 "seek_hole": false, 00:18:28.429 "seek_data": false, 00:18:28.429 "copy": false, 00:18:28.429 "nvme_iov_md": false 00:18:28.429 }, 00:18:28.429 "driver_specific": { 00:18:28.429 "ftl": { 00:18:28.429 "base_bdev": "0ed9577d-c813-41cf-adef-d3ccac769075", 00:18:28.429 "cache": "nvc0n1p0" 00:18:28.429 } 00:18:28.429 } 00:18:28.429 } 00:18:28.429 ] 00:18:28.429 21:17:39 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:18:28.429 21:17:39 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:28.429 21:17:39 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:28.688 21:17:40 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:28.688 21:17:40 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:28.949 [2024-07-14 21:17:40.277567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.277626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:28.949 [2024-07-14 21:17:40.277666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:28.949 [2024-07-14 21:17:40.277682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.277728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:28.949 [2024-07-14 21:17:40.281139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.281212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:28.949 [2024-07-14 21:17:40.281228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.387 ms 00:18:28.949 [2024-07-14 21:17:40.281242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.281722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.281754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:28.949 [2024-07-14 21:17:40.281767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:18:28.949 [2024-07-14 21:17:40.281779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.285012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.285051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:28.949 
[2024-07-14 21:17:40.285066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.192 ms 00:18:28.949 [2024-07-14 21:17:40.285079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.291627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.291679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:28.949 [2024-07-14 21:17:40.291694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.518 ms 00:18:28.949 [2024-07-14 21:17:40.291706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.320693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.320805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:28.949 [2024-07-14 21:17:40.320835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.859 ms 00:18:28.949 [2024-07-14 21:17:40.320849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.338666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.338748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:28.949 [2024-07-14 21:17:40.338770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.760 ms 00:18:28.949 [2024-07-14 21:17:40.338783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.339087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.339124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:28.949 [2024-07-14 21:17:40.339140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:18:28.949 [2024-07-14 21:17:40.339154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.370986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.371068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:28.949 [2024-07-14 21:17:40.371104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.797 ms 00:18:28.949 [2024-07-14 21:17:40.371118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.401196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.401269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:28.949 [2024-07-14 21:17:40.401303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.019 ms 00:18:28.949 [2024-07-14 21:17:40.401316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.431311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.431410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:28.949 [2024-07-14 21:17:40.431440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.925 ms 00:18:28.949 [2024-07-14 21:17:40.431454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.463281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.949 [2024-07-14 21:17:40.463379] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:28.949 [2024-07-14 21:17:40.463396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.685 ms 00:18:28.949 [2024-07-14 21:17:40.463409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.949 [2024-07-14 21:17:40.463467] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:28.949 [2024-07-14 21:17:40.463495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 
[2024-07-14 21:17:40.463820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.463993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:28.949 [2024-07-14 21:17:40.464141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:28.950 [2024-07-14 21:17:40.464178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:28.950 [2024-07-14 21:17:40.464960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:28.950 [2024-07-14 21:17:40.464974] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c37facfd-dd97-400b-8ac2-624a64fa44e0 00:18:28.950 [2024-07-14 21:17:40.464988] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:28.950 [2024-07-14 21:17:40.464999] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:28.950 [2024-07-14 21:17:40.465017] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:28.950 [2024-07-14 21:17:40.465030] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:28.950 [2024-07-14 21:17:40.465043] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:28.950 [2024-07-14 21:17:40.465054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:28.950 [2024-07-14 21:17:40.465067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:28.950 [2024-07-14 21:17:40.465077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:28.950 [2024-07-14 21:17:40.465089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:28.950 [2024-07-14 21:17:40.465101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.950 [2024-07-14 21:17:40.465115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:28.950 [2024-07-14 21:17:40.465127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:18:28.950 [2024-07-14 21:17:40.465141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.950 [2024-07-14 21:17:40.481849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.950 [2024-07-14 21:17:40.481912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:28.950 [2024-07-14 21:17:40.481963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.635 ms 00:18:28.950 [2024-07-14 21:17:40.481977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.950 [2024-07-14 21:17:40.482442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.950 [2024-07-14 21:17:40.482493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:28.950 [2024-07-14 21:17:40.482526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:18:28.950 [2024-07-14 21:17:40.482540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.539948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.540034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.210 [2024-07-14 21:17:40.540069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.540081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:29.210 [2024-07-14 21:17:40.540165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.540182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.210 [2024-07-14 21:17:40.540193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.540206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.540413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.540445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.210 [2024-07-14 21:17:40.540458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.540472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.540506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.540525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.210 [2024-07-14 21:17:40.540538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.540551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.639099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.639170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.210 [2024-07-14 21:17:40.639189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.639203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.718409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.718512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.210 [2024-07-14 21:17:40.718532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.718545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.718654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.718677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:29.210 [2024-07-14 21:17:40.718692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.718705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.718796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.718835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:29.210 [2024-07-14 21:17:40.718873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.718890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.719028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.719068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:29.210 [2024-07-14 21:17:40.719085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 
21:17:40.719099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.719169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.719192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:29.210 [2024-07-14 21:17:40.719205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.719218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.719276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.719295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:29.210 [2024-07-14 21:17:40.719307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.719323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.719382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.210 [2024-07-14 21:17:40.719425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:29.210 [2024-07-14 21:17:40.719439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.210 [2024-07-14 21:17:40.719452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.210 [2024-07-14 21:17:40.719637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.049 ms, result 0 00:18:29.210 true 00:18:29.210 21:17:40 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78760 00:18:29.210 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 78760 ']' 00:18:29.210 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 78760 00:18:29.210 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:18:29.210 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.210 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78760 00:18:29.469 killing process with pid 78760 00:18:29.469 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:29.469 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:29.469 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78760' 00:18:29.469 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 78760 00:18:29.469 21:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 78760 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:33.650 21:17:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:33.909 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:33.909 fio-3.35 00:18:33.909 Starting 1 thread 00:18:39.176 00:18:39.176 test: (groupid=0, jobs=1): err= 0: pid=78963: Sun Jul 14 21:17:50 2024 00:18:39.176 read: IOPS=913, BW=60.6MiB/s (63.6MB/s)(255MiB/4198msec) 00:18:39.176 slat (nsec): min=5436, max=46508, avg=7606.71, stdev=3579.20 00:18:39.176 clat (usec): min=322, max=1418, avg=486.15, stdev=52.93 00:18:39.176 lat (usec): min=328, max=1424, avg=493.76, stdev=53.82 00:18:39.176 clat percentiles (usec): 00:18:39.176 | 1.00th=[ 375], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 449], 00:18:39.176 | 30.00th=[ 461], 40.00th=[ 469], 50.00th=[ 482], 60.00th=[ 490], 00:18:39.176 | 70.00th=[ 502], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 578], 00:18:39.176 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 717], 99.95th=[ 824], 00:18:39.176 | 99.99th=[ 1418] 00:18:39.176 write: IOPS=919, BW=61.1MiB/s (64.0MB/s)(256MiB/4193msec); 0 zone resets 00:18:39.176 slat (nsec): min=18830, max=89641, avg=25047.16, stdev=6994.75 00:18:39.176 clat (usec): min=374, max=1748, avg=558.47, stdev=70.05 00:18:39.176 lat (usec): min=395, max=1776, avg=583.52, stdev=70.83 00:18:39.176 clat percentiles (usec): 00:18:39.176 | 1.00th=[ 441], 5.00th=[ 465], 10.00th=[ 482], 20.00th=[ 510], 00:18:39.176 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 562], 00:18:39.176 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 627], 95.00th=[ 660], 00:18:39.176 | 99.00th=[ 824], 99.50th=[ 898], 99.90th=[ 988], 99.95th=[ 1090], 00:18:39.176 | 99.99th=[ 1745] 00:18:39.176 bw ( KiB/s): min=58208, max=64192, per=100.00%, avg=62560.00, stdev=1947.90, samples=8 00:18:39.177 iops : min= 856, max= 944, avg=920.00, stdev=28.65, samples=8 00:18:39.177 lat (usec) : 500=42.53%, 750=56.54%, 1000=0.88% 00:18:39.177 
lat (msec) : 2=0.05% 00:18:39.177 cpu : usr=99.14%, sys=0.14%, ctx=7, majf=0, minf=1171 00:18:39.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:39.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.177 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:39.177 00:18:39.177 Run status group 0 (all jobs): 00:18:39.177 READ: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=255MiB (267MB), run=4198-4198msec 00:18:39.177 WRITE: bw=61.1MiB/s (64.0MB/s), 61.1MiB/s-61.1MiB/s (64.0MB/s-64.0MB/s), io=256MiB (269MB), run=4193-4193msec 00:18:41.081 ----------------------------------------------------- 00:18:41.081 Suppressions used: 00:18:41.081 count bytes template 00:18:41.081 1 5 /usr/src/fio/parse.c 00:18:41.081 1 8 libtcmalloc_minimal.so 00:18:41.081 1 904 libcrypto.so 00:18:41.081 ----------------------------------------------------- 00:18:41.081 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:41.081 21:17:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:41.081 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:41.081 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:41.081 fio-3.35 00:18:41.081 Starting 2 threads 00:19:13.172 00:19:13.172 first_half: (groupid=0, jobs=1): err= 0: pid=79066: Sun Jul 14 21:18:22 2024 00:19:13.172 read: IOPS=2250, BW=9004KiB/s (9220kB/s)(255MiB/28983msec) 00:19:13.172 slat (usec): min=4, max=210, avg= 7.48, stdev= 2.18 00:19:13.172 clat (usec): min=939, max=319589, avg=42638.13, stdev=20436.87 00:19:13.172 lat (usec): min=946, max=319597, avg=42645.61, stdev=20437.04 00:19:13.172 clat percentiles (msec): 00:19:13.172 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 39], 00:19:13.172 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 40], 00:19:13.172 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 53], 00:19:13.172 | 99.00th=[ 165], 99.50th=[ 186], 99.90th=[ 259], 99.95th=[ 292], 00:19:13.172 | 99.99th=[ 309] 00:19:13.172 write: IOPS=2693, BW=10.5MiB/s (11.0MB/s)(256MiB/24328msec); 0 zone resets 00:19:13.172 slat (usec): min=5, max=241, avg= 9.53, stdev= 5.34 00:19:13.172 clat (usec): min=464, max=105563, avg=14131.48, stdev=24260.15 00:19:13.172 lat (usec): min=482, max=105570, avg=14141.01, stdev=24260.41 00:19:13.172 clat percentiles (usec): 00:19:13.172 | 1.00th=[ 938], 5.00th=[ 1237], 10.00th=[ 1434], 20.00th=[ 1795], 00:19:13.172 | 30.00th=[ 3097], 40.00th=[ 4948], 50.00th=[ 6063], 60.00th=[ 7046], 00:19:13.172 | 70.00th=[ 8094], 80.00th=[ 13173], 90.00th=[ 40109], 95.00th=[ 86508], 00:19:13.172 | 99.00th=[ 94897], 99.50th=[ 98042], 99.90th=[103285], 99.95th=[104334], 00:19:13.172 | 99.99th=[104334] 00:19:13.172 bw ( KiB/s): min= 960, max=40792, per=97.31%, avg=20971.52, stdev=12924.44, samples=25 00:19:13.172 iops : min= 240, max=10198, avg=5242.88, stdev=3231.11, samples=25 00:19:13.172 lat (usec) : 500=0.01%, 750=0.10%, 1000=0.67% 00:19:13.172 lat (msec) : 2=11.34%, 4=5.83%, 10=20.48%, 20=6.43%, 50=47.53% 00:19:13.172 lat (msec) : 100=6.33%, 250=1.24%, 500=0.06% 00:19:13.172 cpu : usr=99.10%, sys=0.21%, ctx=48, majf=0, minf=5565 00:19:13.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:13.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.172 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.172 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.172 second_half: (groupid=0, jobs=1): err= 0: pid=79067: Sun Jul 14 21:18:22 2024 00:19:13.172 read: IOPS=2241, BW=8966KiB/s (9181kB/s)(254MiB/29065msec) 00:19:13.172 slat (nsec): min=4551, max=77886, avg=7481.83, stdev=1955.63 00:19:13.172 clat (usec): min=807, max=325343, avg=42468.85, stdev=21233.09 00:19:13.172 lat (usec): min=815, max=325351, avg=42476.33, stdev=21233.26 00:19:13.172 clat percentiles (msec): 00:19:13.172 | 1.00th=[ 9], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:19:13.172 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 40], 00:19:13.172 | 70.00th=[ 41], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 52], 00:19:13.172 | 
99.00th=[ 165], 99.50th=[ 188], 99.90th=[ 226], 99.95th=[ 243], 00:19:13.172 | 99.99th=[ 317] 00:19:13.172 write: IOPS=3333, BW=13.0MiB/s (13.7MB/s)(256MiB/19662msec); 0 zone resets 00:19:13.172 slat (usec): min=5, max=823, avg= 9.68, stdev= 6.37 00:19:13.172 clat (usec): min=522, max=105363, avg=14513.54, stdev=24937.21 00:19:13.172 lat (usec): min=531, max=105389, avg=14523.21, stdev=24937.38 00:19:13.173 clat percentiles (usec): 00:19:13.173 | 1.00th=[ 988], 5.00th=[ 1254], 10.00th=[ 1434], 20.00th=[ 1663], 00:19:13.173 | 30.00th=[ 1958], 40.00th=[ 2835], 50.00th=[ 4686], 60.00th=[ 6652], 00:19:13.173 | 70.00th=[ 8979], 80.00th=[ 15008], 90.00th=[ 52167], 95.00th=[ 86508], 00:19:13.173 | 99.00th=[ 94897], 99.50th=[ 96994], 99.90th=[103285], 99.95th=[104334], 00:19:13.173 | 99.99th=[105382] 00:19:13.173 bw ( KiB/s): min= 2576, max=49984, per=100.00%, avg=22797.61, stdev=10634.82, samples=23 00:19:13.173 iops : min= 644, max=12496, avg=5699.39, stdev=2658.70, samples=23 00:19:13.173 lat (usec) : 750=0.02%, 1000=0.56% 00:19:13.173 lat (msec) : 2=15.10%, 4=7.83%, 10=12.92%, 20=7.73%, 50=48.10% 00:19:13.173 lat (msec) : 100=6.19%, 250=1.53%, 500=0.02% 00:19:13.173 cpu : usr=98.96%, sys=0.33%, ctx=51, majf=0, minf=5552 00:19:13.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:13.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.173 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.173 issued rwts: total=65149,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.173 00:19:13.173 Run status group 0 (all jobs): 00:19:13.173 READ: bw=17.5MiB/s (18.4MB/s), 8966KiB/s-9004KiB/s (9181kB/s-9220kB/s), io=509MiB (534MB), run=28983-29065msec 00:19:13.173 WRITE: bw=21.0MiB/s (22.1MB/s), 10.5MiB/s-13.0MiB/s (11.0MB/s-13.7MB/s), io=512MiB (537MB), run=19662-24328msec 00:19:13.173 ----------------------------------------------------- 00:19:13.173 Suppressions used: 00:19:13.173 count bytes template 00:19:13.173 2 10 /usr/src/fio/parse.c 00:19:13.173 3 288 /usr/src/fio/iolog.c 00:19:13.173 1 8 libtcmalloc_minimal.so 00:19:13.173 1 904 libcrypto.so 00:19:13.173 ----------------------------------------------------- 00:19:13.173 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:13.173 21:18:24 
ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:13.173 21:18:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.431 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:13.431 fio-3.35 00:19:13.431 Starting 1 thread 00:19:31.569 00:19:31.569 test: (groupid=0, jobs=1): err= 0: pid=79419: Sun Jul 14 21:18:41 2024 00:19:31.569 read: IOPS=6562, BW=25.6MiB/s (26.9MB/s)(255MiB/9936msec) 00:19:31.569 slat (nsec): min=4720, max=40530, avg=6777.29, stdev=1785.10 00:19:31.569 clat (usec): min=772, max=38043, avg=19494.58, stdev=1025.07 00:19:31.569 lat (usec): min=778, max=38050, avg=19501.36, stdev=1025.04 00:19:31.569 clat percentiles (usec): 00:19:31.569 | 1.00th=[18482], 5.00th=[18744], 10.00th=[19006], 20.00th=[19006], 00:19:31.569 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19268], 60.00th=[19530], 00:19:31.569 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20055], 95.00th=[20841], 00:19:31.569 | 99.00th=[22676], 99.50th=[24249], 99.90th=[28443], 99.95th=[33162], 00:19:31.569 | 99.99th=[37487] 00:19:31.569 write: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(256MiB/5923msec); 0 zone resets 00:19:31.569 slat (usec): min=6, max=402, avg=10.02, stdev= 5.85 00:19:31.569 clat (usec): min=665, max=68849, avg=11510.44, stdev=14372.85 00:19:31.569 lat (usec): min=673, max=68859, avg=11520.46, stdev=14372.88 00:19:31.569 clat percentiles (usec): 00:19:31.569 | 1.00th=[ 996], 5.00th=[ 1205], 10.00th=[ 1352], 20.00th=[ 1565], 00:19:31.569 | 30.00th=[ 1795], 40.00th=[ 2278], 50.00th=[ 7963], 60.00th=[ 8979], 00:19:31.569 | 70.00th=[10028], 80.00th=[11338], 90.00th=[42730], 95.00th=[45351], 00:19:31.569 | 99.00th=[49546], 99.50th=[52167], 99.90th=[55313], 99.95th=[57934], 00:19:31.569 | 99.99th=[66847] 00:19:31.569 bw ( KiB/s): min=30856, max=57888, per=98.72%, avg=43690.67, stdev=8615.80, samples=12 00:19:31.569 iops : min= 7712, max=14472, avg=10922.50, stdev=2154.22, samples=12 00:19:31.569 lat (usec) : 750=0.01%, 1000=0.52% 00:19:31.569 lat (msec) : 2=17.67%, 4=2.74%, 10=14.14%, 20=52.39%, 50=12.09% 00:19:31.569 lat (msec) : 100=0.44% 00:19:31.569 cpu : usr=98.81%, sys=0.41%, ctx=20, majf=0, minf=5568 
00:19:31.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:31.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.569 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:31.569 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.569 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:31.569 00:19:31.569 Run status group 0 (all jobs): 00:19:31.569 READ: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=255MiB (267MB), run=9936-9936msec 00:19:31.569 WRITE: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=256MiB (268MB), run=5923-5923msec 00:19:32.530 ----------------------------------------------------- 00:19:32.530 Suppressions used: 00:19:32.530 count bytes template 00:19:32.530 1 5 /usr/src/fio/parse.c 00:19:32.530 2 192 /usr/src/fio/iolog.c 00:19:32.530 1 8 libtcmalloc_minimal.so 00:19:32.530 1 904 libcrypto.so 00:19:32.530 ----------------------------------------------------- 00:19:32.530 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:32.530 Remove shared memory files 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid61946 /dev/shm/spdk_tgt_trace.pid77708 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:32.530 00:19:32.530 real 1m12.095s 00:19:32.530 user 2m40.431s 00:19:32.530 sys 0m3.542s 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:32.530 ************************************ 00:19:32.530 21:18:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:32.530 END TEST ftl_fio_basic 00:19:32.530 ************************************ 00:19:32.530 21:18:43 ftl -- common/autotest_common.sh@1142 -- # return 0 00:19:32.530 21:18:43 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:32.530 21:18:43 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:32.530 21:18:43 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.530 21:18:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:32.530 ************************************ 00:19:32.530 START TEST ftl_bdevperf 00:19:32.530 ************************************ 00:19:32.530 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:32.530 * Looking for test storage... 
00:19:32.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:32.530 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:32.530 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:32.531 21:18:43 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=79673 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 79673 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 79673 ']' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.531 21:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:32.790 [2024-07-14 21:18:44.115738] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:32.790 [2024-07-14 21:18:44.116039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79673 ] 00:19:32.790 [2024-07-14 21:18:44.290444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.048 [2024-07-14 21:18:44.528351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.616 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.616 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:19:33.616 21:18:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:33.616 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:33.616 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:33.617 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:33.617 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:33.617 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:34.184 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:34.184 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:34.184 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:34.184 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:34.184 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:34.184 21:18:45 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:19:34.184 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:19:34.184 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:34.443 { 00:19:34.443 "name": "nvme0n1", 00:19:34.443 "aliases": [ 00:19:34.443 "6588c8c3-52db-49f3-9cae-6d41e94368b2" 00:19:34.443 ], 00:19:34.443 "product_name": "NVMe disk", 00:19:34.443 "block_size": 4096, 00:19:34.443 "num_blocks": 1310720, 00:19:34.443 "uuid": "6588c8c3-52db-49f3-9cae-6d41e94368b2", 00:19:34.443 "assigned_rate_limits": { 00:19:34.443 "rw_ios_per_sec": 0, 00:19:34.443 "rw_mbytes_per_sec": 0, 00:19:34.443 "r_mbytes_per_sec": 0, 00:19:34.443 "w_mbytes_per_sec": 0 00:19:34.443 }, 00:19:34.443 "claimed": true, 00:19:34.443 "claim_type": "read_many_write_one", 00:19:34.443 "zoned": false, 00:19:34.443 "supported_io_types": { 00:19:34.443 "read": true, 00:19:34.443 "write": true, 00:19:34.443 "unmap": true, 00:19:34.443 "flush": true, 00:19:34.443 "reset": true, 00:19:34.443 "nvme_admin": true, 00:19:34.443 "nvme_io": true, 00:19:34.443 "nvme_io_md": false, 00:19:34.443 "write_zeroes": true, 00:19:34.443 "zcopy": false, 00:19:34.443 "get_zone_info": false, 00:19:34.443 "zone_management": false, 00:19:34.443 "zone_append": false, 00:19:34.443 "compare": true, 00:19:34.443 "compare_and_write": false, 00:19:34.443 "abort": true, 00:19:34.443 "seek_hole": false, 00:19:34.443 "seek_data": false, 00:19:34.443 "copy": true, 00:19:34.443 "nvme_iov_md": false 00:19:34.443 }, 00:19:34.443 "driver_specific": { 00:19:34.443 "nvme": [ 00:19:34.443 { 00:19:34.443 "pci_address": "0000:00:11.0", 00:19:34.443 "trid": { 00:19:34.443 "trtype": "PCIe", 00:19:34.443 "traddr": "0000:00:11.0" 00:19:34.443 }, 00:19:34.443 "ctrlr_data": { 00:19:34.443 "cntlid": 0, 00:19:34.443 "vendor_id": "0x1b36", 00:19:34.443 "model_number": "QEMU NVMe Ctrl", 00:19:34.443 "serial_number": "12341", 00:19:34.443 "firmware_revision": "8.0.0", 00:19:34.443 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:34.443 "oacs": { 00:19:34.443 "security": 0, 00:19:34.443 "format": 1, 00:19:34.443 "firmware": 0, 00:19:34.443 "ns_manage": 1 00:19:34.443 }, 00:19:34.443 "multi_ctrlr": false, 00:19:34.443 "ana_reporting": false 00:19:34.443 }, 00:19:34.443 "vs": { 00:19:34.443 "nvme_version": "1.4" 00:19:34.443 }, 00:19:34.443 "ns_data": { 00:19:34.443 "id": 1, 00:19:34.443 "can_share": false 00:19:34.443 } 00:19:34.443 } 00:19:34.443 ], 00:19:34.443 "mp_policy": "active_passive" 00:19:34.443 } 00:19:34.443 } 00:19:34.443 ]' 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:34.443 21:18:45 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:34.443 21:18:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:34.702 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=b088627f-a024-43f0-b9b0-88441ff09939 00:19:34.702 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:34.702 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b088627f-a024-43f0-b9b0-88441ff09939 00:19:34.961 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:35.219 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=9e9a25d0-2b25-4114-a981-441d4f9fac27 00:19:35.219 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9e9a25d0-2b25-4114-a981-441d4f9fac27 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:19:35.478 21:18:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:35.736 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:35.736 { 00:19:35.736 "name": "5639d14d-01f1-43d0-ad72-74ba9e598c25", 00:19:35.736 "aliases": [ 00:19:35.736 "lvs/nvme0n1p0" 00:19:35.736 ], 00:19:35.736 "product_name": "Logical Volume", 00:19:35.736 "block_size": 4096, 00:19:35.736 "num_blocks": 26476544, 00:19:35.736 "uuid": "5639d14d-01f1-43d0-ad72-74ba9e598c25", 00:19:35.736 "assigned_rate_limits": { 00:19:35.736 "rw_ios_per_sec": 0, 00:19:35.736 "rw_mbytes_per_sec": 0, 00:19:35.736 "r_mbytes_per_sec": 0, 00:19:35.736 "w_mbytes_per_sec": 0 00:19:35.736 }, 00:19:35.736 "claimed": false, 00:19:35.736 "zoned": false, 00:19:35.736 "supported_io_types": { 00:19:35.736 "read": true, 00:19:35.736 "write": true, 00:19:35.736 "unmap": true, 00:19:35.736 "flush": false, 00:19:35.736 "reset": true, 00:19:35.736 "nvme_admin": false, 00:19:35.736 "nvme_io": false, 00:19:35.736 "nvme_io_md": false, 00:19:35.736 "write_zeroes": true, 00:19:35.736 "zcopy": false, 00:19:35.736 "get_zone_info": false, 00:19:35.736 "zone_management": false, 00:19:35.736 "zone_append": false, 00:19:35.736 "compare": false, 00:19:35.736 "compare_and_write": false, 00:19:35.736 "abort": false, 00:19:35.736 "seek_hole": true, 
00:19:35.736 "seek_data": true, 00:19:35.736 "copy": false, 00:19:35.736 "nvme_iov_md": false 00:19:35.736 }, 00:19:35.736 "driver_specific": { 00:19:35.736 "lvol": { 00:19:35.736 "lvol_store_uuid": "9e9a25d0-2b25-4114-a981-441d4f9fac27", 00:19:35.736 "base_bdev": "nvme0n1", 00:19:35.736 "thin_provision": true, 00:19:35.736 "num_allocated_clusters": 0, 00:19:35.736 "snapshot": false, 00:19:35.736 "clone": false, 00:19:35.736 "esnap_clone": false 00:19:35.736 } 00:19:35.736 } 00:19:35.736 } 00:19:35.736 ]' 00:19:35.737 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:35.995 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:19:35.995 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:35.995 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:35.995 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:35.995 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:19:35.995 21:18:47 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:35.995 21:18:47 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:35.995 21:18:47 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:36.254 21:18:47 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:36.254 21:18:47 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:36.254 21:18:47 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:36.254 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:36.254 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:36.254 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:19:36.254 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:19:36.254 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:36.512 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:36.512 { 00:19:36.512 "name": "5639d14d-01f1-43d0-ad72-74ba9e598c25", 00:19:36.512 "aliases": [ 00:19:36.512 "lvs/nvme0n1p0" 00:19:36.512 ], 00:19:36.512 "product_name": "Logical Volume", 00:19:36.512 "block_size": 4096, 00:19:36.512 "num_blocks": 26476544, 00:19:36.512 "uuid": "5639d14d-01f1-43d0-ad72-74ba9e598c25", 00:19:36.512 "assigned_rate_limits": { 00:19:36.512 "rw_ios_per_sec": 0, 00:19:36.512 "rw_mbytes_per_sec": 0, 00:19:36.512 "r_mbytes_per_sec": 0, 00:19:36.512 "w_mbytes_per_sec": 0 00:19:36.512 }, 00:19:36.512 "claimed": false, 00:19:36.512 "zoned": false, 00:19:36.512 "supported_io_types": { 00:19:36.512 "read": true, 00:19:36.512 "write": true, 00:19:36.512 "unmap": true, 00:19:36.512 "flush": false, 00:19:36.512 "reset": true, 00:19:36.512 "nvme_admin": false, 00:19:36.512 "nvme_io": false, 00:19:36.512 "nvme_io_md": false, 00:19:36.512 "write_zeroes": true, 00:19:36.512 "zcopy": false, 00:19:36.512 "get_zone_info": false, 00:19:36.512 "zone_management": false, 00:19:36.512 "zone_append": false, 00:19:36.512 "compare": false, 00:19:36.512 "compare_and_write": false, 00:19:36.512 "abort": false, 00:19:36.512 "seek_hole": true, 00:19:36.512 "seek_data": true, 00:19:36.512 
"copy": false, 00:19:36.512 "nvme_iov_md": false 00:19:36.512 }, 00:19:36.512 "driver_specific": { 00:19:36.512 "lvol": { 00:19:36.512 "lvol_store_uuid": "9e9a25d0-2b25-4114-a981-441d4f9fac27", 00:19:36.512 "base_bdev": "nvme0n1", 00:19:36.512 "thin_provision": true, 00:19:36.512 "num_allocated_clusters": 0, 00:19:36.512 "snapshot": false, 00:19:36.512 "clone": false, 00:19:36.512 "esnap_clone": false 00:19:36.512 } 00:19:36.512 } 00:19:36.512 } 00:19:36.512 ]' 00:19:36.512 21:18:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:36.512 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:19:36.512 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:36.512 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:36.512 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:36.512 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:19:36.512 21:18:48 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:36.512 21:18:48 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:36.770 21:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:19:36.770 21:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:36.770 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:36.770 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:36.770 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:19:36.770 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:19:36.770 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5639d14d-01f1-43d0-ad72-74ba9e598c25 00:19:37.029 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:37.029 { 00:19:37.029 "name": "5639d14d-01f1-43d0-ad72-74ba9e598c25", 00:19:37.029 "aliases": [ 00:19:37.029 "lvs/nvme0n1p0" 00:19:37.029 ], 00:19:37.029 "product_name": "Logical Volume", 00:19:37.029 "block_size": 4096, 00:19:37.029 "num_blocks": 26476544, 00:19:37.029 "uuid": "5639d14d-01f1-43d0-ad72-74ba9e598c25", 00:19:37.029 "assigned_rate_limits": { 00:19:37.030 "rw_ios_per_sec": 0, 00:19:37.030 "rw_mbytes_per_sec": 0, 00:19:37.030 "r_mbytes_per_sec": 0, 00:19:37.030 "w_mbytes_per_sec": 0 00:19:37.030 }, 00:19:37.030 "claimed": false, 00:19:37.030 "zoned": false, 00:19:37.030 "supported_io_types": { 00:19:37.030 "read": true, 00:19:37.030 "write": true, 00:19:37.030 "unmap": true, 00:19:37.030 "flush": false, 00:19:37.030 "reset": true, 00:19:37.030 "nvme_admin": false, 00:19:37.030 "nvme_io": false, 00:19:37.030 "nvme_io_md": false, 00:19:37.030 "write_zeroes": true, 00:19:37.030 "zcopy": false, 00:19:37.030 "get_zone_info": false, 00:19:37.030 "zone_management": false, 00:19:37.030 "zone_append": false, 00:19:37.030 "compare": false, 00:19:37.030 "compare_and_write": false, 00:19:37.030 "abort": false, 00:19:37.030 "seek_hole": true, 00:19:37.030 "seek_data": true, 00:19:37.030 "copy": false, 00:19:37.030 "nvme_iov_md": false 00:19:37.030 }, 00:19:37.030 "driver_specific": { 00:19:37.030 "lvol": { 00:19:37.030 "lvol_store_uuid": "9e9a25d0-2b25-4114-a981-441d4f9fac27", 00:19:37.030 "base_bdev": 
"nvme0n1", 00:19:37.030 "thin_provision": true, 00:19:37.030 "num_allocated_clusters": 0, 00:19:37.030 "snapshot": false, 00:19:37.030 "clone": false, 00:19:37.030 "esnap_clone": false 00:19:37.030 } 00:19:37.030 } 00:19:37.030 } 00:19:37.030 ]' 00:19:37.030 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:37.030 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:19:37.030 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:37.288 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:37.288 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:37.288 21:18:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:19:37.288 21:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:19:37.288 21:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5639d14d-01f1-43d0-ad72-74ba9e598c25 -c nvc0n1p0 --l2p_dram_limit 20 00:19:37.547 [2024-07-14 21:18:48.854336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.854403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:37.547 [2024-07-14 21:18:48.854429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.547 [2024-07-14 21:18:48.854442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.854521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.854540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.547 [2024-07-14 21:18:48.854555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:37.547 [2024-07-14 21:18:48.854570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.854599] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:37.547 [2024-07-14 21:18:48.855576] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:37.547 [2024-07-14 21:18:48.855619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.855635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.547 [2024-07-14 21:18:48.855651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:19:37.547 [2024-07-14 21:18:48.855662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.855789] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e392f773-62cb-4a20-85fe-920d26f85288 00:19:37.547 [2024-07-14 21:18:48.856847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.856891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:37.547 [2024-07-14 21:18:48.856908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:37.547 [2024-07-14 21:18:48.856925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.861540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.861590] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.547 [2024-07-14 21:18:48.861633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.550 ms 00:19:37.547 [2024-07-14 21:18:48.861648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.861765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.861807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.547 [2024-07-14 21:18:48.861825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:37.547 [2024-07-14 21:18:48.861859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.861930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.861952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:37.547 [2024-07-14 21:18:48.861965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:37.547 [2024-07-14 21:18:48.861979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.862008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.547 [2024-07-14 21:18:48.866602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.866639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.547 [2024-07-14 21:18:48.866675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:19:37.547 [2024-07-14 21:18:48.866688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.866732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.866750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:37.547 [2024-07-14 21:18:48.866765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:37.547 [2024-07-14 21:18:48.866777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.866849] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:37.547 [2024-07-14 21:18:48.867011] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:37.547 [2024-07-14 21:18:48.867047] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:37.547 [2024-07-14 21:18:48.867064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:37.547 [2024-07-14 21:18:48.867082] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:37.547 [2024-07-14 21:18:48.867096] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:37.547 [2024-07-14 21:18:48.867110] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:37.547 [2024-07-14 21:18:48.867122] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:37.547 [2024-07-14 21:18:48.867137] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:37.547 [2024-07-14 21:18:48.867149] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:19:37.547 [2024-07-14 21:18:48.867163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.867175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:37.547 [2024-07-14 21:18:48.867189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:19:37.547 [2024-07-14 21:18:48.867203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.867307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.547 [2024-07-14 21:18:48.867332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:37.547 [2024-07-14 21:18:48.867347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:37.547 [2024-07-14 21:18:48.867359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.547 [2024-07-14 21:18:48.867464] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:37.547 [2024-07-14 21:18:48.867479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:37.547 [2024-07-14 21:18:48.867494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.547 [2024-07-14 21:18:48.867506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.547 [2024-07-14 21:18:48.867523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:37.547 [2024-07-14 21:18:48.867534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:37.547 [2024-07-14 21:18:48.867548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:37.548 [2024-07-14 21:18:48.867558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:37.548 [2024-07-14 21:18:48.867571] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.548 [2024-07-14 21:18:48.867595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:37.548 [2024-07-14 21:18:48.867606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:37.548 [2024-07-14 21:18:48.867618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.548 [2024-07-14 21:18:48.867629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:37.548 [2024-07-14 21:18:48.867643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:37.548 [2024-07-14 21:18:48.867654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:37.548 [2024-07-14 21:18:48.867680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:37.548 [2024-07-14 21:18:48.867706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:37.548 [2024-07-14 21:18:48.867731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.548 [2024-07-14 21:18:48.867756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:37.548 [2024-07-14 21:18:48.867767] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867779] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.548 [2024-07-14 21:18:48.867790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:37.548 [2024-07-14 21:18:48.867819] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.548 [2024-07-14 21:18:48.867844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:37.548 [2024-07-14 21:18:48.867854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.548 [2024-07-14 21:18:48.867878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:37.548 [2024-07-14 21:18:48.867902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.548 [2024-07-14 21:18:48.867925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:37.548 [2024-07-14 21:18:48.867936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:37.548 [2024-07-14 21:18:48.867948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.548 [2024-07-14 21:18:48.867959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:37.548 [2024-07-14 21:18:48.867973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:37.548 [2024-07-14 21:18:48.867984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.548 [2024-07-14 21:18:48.867997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:37.548 [2024-07-14 21:18:48.868007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:37.548 [2024-07-14 21:18:48.868020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.548 [2024-07-14 21:18:48.868030] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:37.548 [2024-07-14 21:18:48.868043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:37.548 [2024-07-14 21:18:48.868054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.548 [2024-07-14 21:18:48.868068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.548 [2024-07-14 21:18:48.868080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:37.548 [2024-07-14 21:18:48.868095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:37.548 [2024-07-14 21:18:48.868106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:37.548 [2024-07-14 21:18:48.868119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:37.548 [2024-07-14 21:18:48.868129] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:37.548 [2024-07-14 21:18:48.868142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:37.548 [2024-07-14 21:18:48.868157] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:37.548 [2024-07-14 21:18:48.868174] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.548 [2024-07-14 21:18:48.868188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:37.548 [2024-07-14 21:18:48.868202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:37.548 [2024-07-14 21:18:48.868214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:37.548 [2024-07-14 21:18:48.868227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:37.548 [2024-07-14 21:18:48.868239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:37.548 [2024-07-14 21:18:48.868252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:37.548 [2024-07-14 21:18:48.868263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:37.548 [2024-07-14 21:18:48.868277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:37.548 [2024-07-14 21:18:48.868288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:37.548 [2024-07-14 21:18:48.868305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:37.548 [2024-07-14 21:18:48.868317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:37.548 [2024-07-14 21:18:48.868330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:37.548 [2024-07-14 21:18:48.868342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:37.548 [2024-07-14 21:18:48.868355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:37.548 [2024-07-14 21:18:48.868366] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:37.548 [2024-07-14 21:18:48.868392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.548 [2024-07-14 21:18:48.868416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:37.548 [2024-07-14 21:18:48.868430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:37.548 [2024-07-14 21:18:48.868442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:37.548 [2024-07-14 21:18:48.868455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:37.548 [2024-07-14 21:18:48.868468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.548 [2024-07-14 21:18:48.868482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:37.548 [2024-07-14 21:18:48.868497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:19:37.548 [2024-07-14 21:18:48.868511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.548 [2024-07-14 21:18:48.868557] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:37.548 [2024-07-14 21:18:48.868578] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:40.075 [2024-07-14 21:18:51.116364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.075 [2024-07-14 21:18:51.116462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:40.076 [2024-07-14 21:18:51.116486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2247.821 ms 00:19:40.076 [2024-07-14 21:18:51.116506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.155133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.155213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.076 [2024-07-14 21:18:51.155251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.362 ms 00:19:40.076 [2024-07-14 21:18:51.155266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.155470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.155492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:40.076 [2024-07-14 21:18:51.155504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:40.076 [2024-07-14 21:18:51.155533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.192158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.192230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.076 [2024-07-14 21:18:51.192279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.562 ms 00:19:40.076 [2024-07-14 21:18:51.192291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.192340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.192362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.076 [2024-07-14 21:18:51.192374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.076 [2024-07-14 21:18:51.192429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.192834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.192908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.076 [2024-07-14 21:18:51.192924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:40.076 [2024-07-14 21:18:51.192938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.193111] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.193131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.076 [2024-07-14 21:18:51.193144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:40.076 [2024-07-14 21:18:51.193159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.209512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.209570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.076 [2024-07-14 21:18:51.209602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.326 ms 00:19:40.076 [2024-07-14 21:18:51.209631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.223541] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:40.076 [2024-07-14 21:18:51.228774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.228851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.076 [2024-07-14 21:18:51.228898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.022 ms 00:19:40.076 [2024-07-14 21:18:51.228920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.296324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.296450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:40.076 [2024-07-14 21:18:51.296478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.355 ms 00:19:40.076 [2024-07-14 21:18:51.296491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.296721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.296741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.076 [2024-07-14 21:18:51.296760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:19:40.076 [2024-07-14 21:18:51.296773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.329648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.329705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:40.076 [2024-07-14 21:18:51.329727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.786 ms 00:19:40.076 [2024-07-14 21:18:51.329740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.361612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.361673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:40.076 [2024-07-14 21:18:51.361710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.817 ms 00:19:40.076 [2024-07-14 21:18:51.361722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.362496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.362574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.076 [2024-07-14 21:18:51.362608] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:19:40.076 [2024-07-14 21:18:51.362620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.450876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.450942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:40.076 [2024-07-14 21:18:51.450969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.192 ms 00:19:40.076 [2024-07-14 21:18:51.450982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.483042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.483095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:40.076 [2024-07-14 21:18:51.483117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.005 ms 00:19:40.076 [2024-07-14 21:18:51.483130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.515015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.515066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:40.076 [2024-07-14 21:18:51.515088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.828 ms 00:19:40.076 [2024-07-14 21:18:51.515100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.546766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.546825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.076 [2024-07-14 21:18:51.546849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.615 ms 00:19:40.076 [2024-07-14 21:18:51.546862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.546920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.546938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.076 [2024-07-14 21:18:51.546957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:40.076 [2024-07-14 21:18:51.546969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.547082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-07-14 21:18:51.547101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.076 [2024-07-14 21:18:51.547116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:40.076 [2024-07-14 21:18:51.547128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-07-14 21:18:51.548103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2693.282 ms, result 0 00:19:40.076 { 00:19:40.076 "name": "ftl0", 00:19:40.076 "uuid": "e392f773-62cb-4a20-85fe-920d26f85288" 00:19:40.076 } 00:19:40.076 21:18:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:40.076 21:18:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:19:40.076 21:18:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:19:40.334 21:18:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-07-14 21:18:51.972539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:40.593 I/O size of 69632 is greater than zero copy threshold (65536).
00:19:40.593 Zero copy mechanism will not be used.
00:19:40.593 Running I/O for 4 seconds...
00:19:44.778
00:19:44.778 Latency(us)
00:19:44.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:44.778 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:44.778 ftl0 : 4.00 1769.39 117.50 0.00 0.00 591.51 227.14 1027.72
00:19:44.778 ===================================================================================================================
00:19:44.778 Total : 1769.39 117.50 0.00 0.00 591.51 227.14 1027.72
00:19:44.778 [2024-07-14 21:18:55.981587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:44.778 0
00:19:44.778 21:18:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-07-14 21:18:56.116272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:19:49.000
00:19:49.000 Latency(us)
00:19:49.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:49.000 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:49.000 ftl0 : 4.02 7750.58 30.28 0.00 0.00 16466.12 323.96 30146.56
00:19:49.001 ===================================================================================================================
00:19:49.001 Total : 7750.58 30.28 0.00 0.00 16466.12 0.00 30146.56
00:19:49.001 [2024-07-14 21:19:00.147357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:49.001 0
00:19:49.001 21:19:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-07-14 21:19:00.281546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
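All three perform_tests passes here drive the same ftl0 bdev through bdevperf's RPC helper: -q sets the queue depth, -w the workload, -t the runtime in seconds, and -o the I/O size in bytes; the 69632-byte I/O of the first pass exceeds the 65536-byte threshold, which is why zero copy is disabled for it. A minimal sketch of reproducing one such pass by hand, assuming a bdevperf started with the '-z -T ftl0' flags recorded in this log:

    # start bdevperf in wait-for-RPC mode; no jobs run until perform_tests is sent
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &

    # queue depth 1, random 68 KiB writes, 4 seconds, against the ftl0 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632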
00:19:53.186
00:19:53.186 Latency(us)
00:19:53.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:53.186 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:53.186 Verification LBA range: start 0x0 length 0x1400000
00:19:53.186 ftl0 : 4.01 5951.87 23.25 0.00 0.00 21428.51 368.64 31218.97
00:19:53.186 ===================================================================================================================
00:19:53.186 Total : 5951.87 23.25 0.00 0.00 21428.51 0.00 31218.97
00:19:53.186 [2024-07-14 21:19:04.312983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:53.186 0
00:19:53.186 21:19:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-07-14 21:19:04.566321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-14 21:19:04.566392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-07-14 21:19:04.566431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-07-14 21:19:04.566443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-14 21:19:04.566480] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-07-14 21:19:04.569723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-14 21:19:04.569774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-07-14 21:19:04.569805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms
[2024-07-14 21:19:04.569846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-14 21:19:04.571838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-14 21:19:04.571958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-07-14 21:19:04.571976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms
[2024-07-14 21:19:04.571990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-14 21:19:04.747148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-14 21:19:04.747280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
[2024-07-14 21:19:04.747306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 175.132 ms
[2024-07-14 21:19:04.747323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-14 21:19:04.754245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-14 21:19:04.754298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
[2024-07-14 21:19:04.754329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.876 ms
[2024-07-14 21:19:04.754342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-14 21:19:04.786352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-14 21:19:04.786418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
[2024-07-14 21:19:04.786453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 31.925 ms 00:19:53.447 [2024-07-14 21:19:04.786467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.447 [2024-07-14 21:19:04.804513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.447 [2024-07-14 21:19:04.804580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.447 [2024-07-14 21:19:04.804599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.000 ms 00:19:53.447 [2024-07-14 21:19:04.804618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.447 [2024-07-14 21:19:04.804851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.447 [2024-07-14 21:19:04.804891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.447 [2024-07-14 21:19:04.804905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:19:53.447 [2024-07-14 21:19:04.804922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.447 [2024-07-14 21:19:04.834721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.447 [2024-07-14 21:19:04.834771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:53.447 [2024-07-14 21:19:04.834788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.777 ms 00:19:53.447 [2024-07-14 21:19:04.834821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.447 [2024-07-14 21:19:04.866257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.447 [2024-07-14 21:19:04.866358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:53.447 [2024-07-14 21:19:04.866392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.365 ms 00:19:53.447 [2024-07-14 21:19:04.866407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.447 [2024-07-14 21:19:04.897651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.447 [2024-07-14 21:19:04.897713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.447 [2024-07-14 21:19:04.897747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.196 ms 00:19:53.447 [2024-07-14 21:19:04.897761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.447 [2024-07-14 21:19:04.927899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.447 [2024-07-14 21:19:04.927961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.447 [2024-07-14 21:19:04.927995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.003 ms 00:19:53.447 [2024-07-14 21:19:04.928013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.447 [2024-07-14 21:19:04.928059] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.447 [2024-07-14 21:19:04.928087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:19:53.447 [2024-07-14 21:19:04.928143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.447 [2024-07-14 21:19:04.928896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.928908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.928923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.928935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.928949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.928961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.928978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.928990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929190] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.448 [2024-07-14 21:19:04.929498] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.448 [2024-07-14 21:19:04.929510] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e392f773-62cb-4a20-85fe-920d26f85288 00:19:53.448 [2024-07-14 21:19:04.929525] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.448 [2024-07-14 21:19:04.929537] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:53.448 [2024-07-14 21:19:04.929550] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.448 [2024-07-14 21:19:04.929562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.448 [2024-07-14 21:19:04.929577] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.448 [2024-07-14 21:19:04.929589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.448 [2024-07-14 21:19:04.929602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.448 [2024-07-14 21:19:04.929613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.448 [2024-07-14 21:19:04.929627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.448 [2024-07-14 21:19:04.929639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.448 [2024-07-14 21:19:04.929653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.448 [2024-07-14 21:19:04.929666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:19:53.448 [2024-07-14 21:19:04.929680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.448 [2024-07-14 21:19:04.946016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.448 [2024-07-14 21:19:04.946078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.448 [2024-07-14 21:19:04.946098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.276 ms 00:19:53.448 [2024-07-14 21:19:04.946112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.448 [2024-07-14 21:19:04.946541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.448 [2024-07-14 21:19:04.946576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.448 [2024-07-14 21:19:04.946592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:19:53.448 [2024-07-14 21:19:04.946606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.448 [2024-07-14 21:19:04.986103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.448 [2024-07-14 21:19:04.986179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.448 [2024-07-14 21:19:04.986214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.448 [2024-07-14 21:19:04.986247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.448 [2024-07-14 21:19:04.986336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.448 [2024-07-14 21:19:04.986354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.448 [2024-07-14 21:19:04.986366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.448 [2024-07-14 21:19:04.986379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.448 [2024-07-14 21:19:04.986527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.448 [2024-07-14 21:19:04.986553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.448 [2024-07-14 21:19:04.986566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.448 [2024-07-14 21:19:04.986583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.448 [2024-07-14 21:19:04.986606] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.448 [2024-07-14 21:19:04.986623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.448 [2024-07-14 21:19:04.986635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.448 [2024-07-14 21:19:04.986648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.708 [2024-07-14 21:19:05.080703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.708 [2024-07-14 21:19:05.080870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.708 [2024-07-14 21:19:05.080907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.708 [2024-07-14 21:19:05.080924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.708 [2024-07-14 21:19:05.157785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.709 [2024-07-14 21:19:05.157916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.709 [2024-07-14 21:19:05.157936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.709 [2024-07-14 21:19:05.157950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.709 [2024-07-14 21:19:05.158053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.709 [2024-07-14 21:19:05.158075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.709 [2024-07-14 21:19:05.158088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.709 [2024-07-14 21:19:05.158102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.709 [2024-07-14 21:19:05.158193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.709 [2024-07-14 21:19:05.158214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.709 [2024-07-14 21:19:05.158227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.709 [2024-07-14 21:19:05.158241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.709 [2024-07-14 21:19:05.158363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.709 [2024-07-14 21:19:05.158388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.709 [2024-07-14 21:19:05.158402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.709 [2024-07-14 21:19:05.158419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.709 [2024-07-14 21:19:05.158471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.709 [2024-07-14 21:19:05.158501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.709 [2024-07-14 21:19:05.158515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.709 [2024-07-14 21:19:05.158530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.709 [2024-07-14 21:19:05.158576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.709 [2024-07-14 21:19:05.158596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.709 [2024-07-14 21:19:05.158608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.709 [2024-07-14 21:19:05.158621] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
[2024-07-14 21:19:05.158680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-07-14 21:19:05.158707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
[2024-07-14 21:19:05.158721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-07-14 21:19:05.158735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-14 21:19:05.158905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 592.544 ms, result 0
00:19:53.709 true
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 79673
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 79673 ']'
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 79673
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79673
killing process with pid 79673
Received shutdown signal, test time was about 4.000000 seconds
00:19:53.709
00:19:53.709 Latency(us)
00:19:53.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:53.709 ===================================================================================================================
00:19:53.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79673'
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 79673
00:19:53.709 21:19:05 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 79673
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:55.088 Remove shared memory files
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:19:55.088 ************************************
00:19:55.088 END TEST ftl_bdevperf
00:19:55.088 ************************************
00:19:55.088
00:19:55.088 real 0m22.494s
00:19:55.088 user 0m26.271s
00:19:55.088 sys 0m1.064s
00:19:55.088 21:19:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
21:19:06 ftl.ftl_bdevperf --
common/autotest_common.sh@10 -- # set +x 00:19:55.088 21:19:06 ftl -- common/autotest_common.sh@1142 -- # return 0 00:19:55.088 21:19:06 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:55.088 21:19:06 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:55.088 21:19:06 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.088 21:19:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:55.089 ************************************ 00:19:55.089 START TEST ftl_trim 00:19:55.089 ************************************ 00:19:55.089 21:19:06 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:55.089 * Looking for test storage... 00:19:55.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:55.089 
21:19:06 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=80025 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 80025 00:19:55.089 21:19:06 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:55.089 21:19:06 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80025 ']' 00:19:55.089 21:19:06 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.089 21:19:06 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.089 21:19:06 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.089 21:19:06 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.089 21:19:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:55.348 [2024-07-14 21:19:06.660613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
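The startup trace above follows the usual SPDK test-harness pattern: trim.sh launches spdk_tgt in the background with a three-core mask (-m 0x7, hence the three reactors on cores 0-2 below), records its PID (svcpid=80025), and waitforlisten polls the RPC socket until the target answers. A minimal sketch of that pattern, assuming the default socket path /var/tmp/spdk.sock (the actual helper lives in common/autotest_common.sh, so details may differ):

  # Launch the target in the background and remember its PID.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
  svcpid=$!
  # Poll until the RPC server answers; spdk_get_version is a cheap no-op query.
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          spdk_get_version &> /dev/null && break
      sleep 0.1
  done

The timeout=240 exported by trim.sh above matters later: the bdev_ftl_create RPC is issued with -t 240 because first-time FTL startup scrubs the NV cache before returning.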
00:19:55.348 [2024-07-14 21:19:06.660817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80025 ] 00:19:55.348 [2024-07-14 21:19:06.832884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:55.606 [2024-07-14 21:19:07.041564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.606 [2024-07-14 21:19:07.041678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.606 [2024-07-14 21:19:07.041692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.541 21:19:07 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.541 21:19:07 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:56.541 21:19:07 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:56.541 21:19:07 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:56.541 21:19:07 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:56.541 21:19:07 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:56.541 21:19:07 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:56.541 21:19:07 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:56.541 21:19:08 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:56.541 21:19:08 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:56.541 21:19:08 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:56.541 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:56.541 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:56.541 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:56.541 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:56.541 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:56.799 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:56.799 { 00:19:56.799 "name": "nvme0n1", 00:19:56.799 "aliases": [ 00:19:56.799 "9a0d0a30-79b5-4afa-b392-e358996e64b5" 00:19:56.799 ], 00:19:56.799 "product_name": "NVMe disk", 00:19:56.799 "block_size": 4096, 00:19:56.799 "num_blocks": 1310720, 00:19:56.799 "uuid": "9a0d0a30-79b5-4afa-b392-e358996e64b5", 00:19:56.799 "assigned_rate_limits": { 00:19:56.799 "rw_ios_per_sec": 0, 00:19:56.799 "rw_mbytes_per_sec": 0, 00:19:56.799 "r_mbytes_per_sec": 0, 00:19:56.799 "w_mbytes_per_sec": 0 00:19:56.799 }, 00:19:56.799 "claimed": true, 00:19:56.799 "claim_type": "read_many_write_one", 00:19:56.799 "zoned": false, 00:19:56.799 "supported_io_types": { 00:19:56.799 "read": true, 00:19:56.799 "write": true, 00:19:56.799 "unmap": true, 00:19:56.799 "flush": true, 00:19:56.799 "reset": true, 00:19:56.799 "nvme_admin": true, 00:19:56.799 "nvme_io": true, 00:19:56.799 "nvme_io_md": false, 00:19:56.799 "write_zeroes": true, 00:19:56.799 "zcopy": false, 00:19:56.799 "get_zone_info": false, 00:19:56.799 "zone_management": false, 00:19:56.799 "zone_append": false, 00:19:56.799 "compare": true, 00:19:56.799 "compare_and_write": false, 00:19:56.799 "abort": true, 00:19:56.799 "seek_hole": false, 00:19:56.799 "seek_data": false, 00:19:56.799 
"copy": true, 00:19:56.799 "nvme_iov_md": false 00:19:56.799 }, 00:19:56.799 "driver_specific": { 00:19:56.799 "nvme": [ 00:19:56.799 { 00:19:56.799 "pci_address": "0000:00:11.0", 00:19:56.799 "trid": { 00:19:56.799 "trtype": "PCIe", 00:19:56.799 "traddr": "0000:00:11.0" 00:19:56.799 }, 00:19:56.799 "ctrlr_data": { 00:19:56.800 "cntlid": 0, 00:19:56.800 "vendor_id": "0x1b36", 00:19:56.800 "model_number": "QEMU NVMe Ctrl", 00:19:56.800 "serial_number": "12341", 00:19:56.800 "firmware_revision": "8.0.0", 00:19:56.800 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:56.800 "oacs": { 00:19:56.800 "security": 0, 00:19:56.800 "format": 1, 00:19:56.800 "firmware": 0, 00:19:56.800 "ns_manage": 1 00:19:56.800 }, 00:19:56.800 "multi_ctrlr": false, 00:19:56.800 "ana_reporting": false 00:19:56.800 }, 00:19:56.800 "vs": { 00:19:56.800 "nvme_version": "1.4" 00:19:56.800 }, 00:19:56.800 "ns_data": { 00:19:56.800 "id": 1, 00:19:56.800 "can_share": false 00:19:56.800 } 00:19:56.800 } 00:19:56.800 ], 00:19:56.800 "mp_policy": "active_passive" 00:19:56.800 } 00:19:56.800 } 00:19:56.800 ]' 00:19:56.800 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:56.800 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:56.800 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:57.058 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:57.058 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:57.058 21:19:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:19:57.058 21:19:08 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:57.058 21:19:08 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:57.058 21:19:08 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:57.058 21:19:08 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:57.058 21:19:08 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:57.316 21:19:08 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=9e9a25d0-2b25-4114-a981-441d4f9fac27 00:19:57.316 21:19:08 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:57.316 21:19:08 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e9a25d0-2b25-4114-a981-441d4f9fac27 00:19:57.316 21:19:08 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:57.574 21:19:09 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=545d5dfe-9626-4097-9cd9-0ac04e846900 00:19:57.574 21:19:09 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 545d5dfe-9626-4097-9cd9-0ac04e846900 00:19:57.833 21:19:09 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=348f26c2-3232-4962-8b10-0d66316ac02c 00:19:57.834 21:19:09 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 348f26c2-3232-4962-8b10-0d66316ac02c 00:19:57.834 21:19:09 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:57.834 21:19:09 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:57.834 21:19:09 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=348f26c2-3232-4962-8b10-0d66316ac02c 00:19:57.834 21:19:09 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:57.834 21:19:09 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 348f26c2-3232-4962-8b10-0d66316ac02c 00:19:57.834 21:19:09 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=348f26c2-3232-4962-8b10-0d66316ac02c 00:19:57.834 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:57.834 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:57.834 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:57.834 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 348f26c2-3232-4962-8b10-0d66316ac02c 00:19:58.091 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:58.091 { 00:19:58.091 "name": "348f26c2-3232-4962-8b10-0d66316ac02c", 00:19:58.091 "aliases": [ 00:19:58.091 "lvs/nvme0n1p0" 00:19:58.091 ], 00:19:58.091 "product_name": "Logical Volume", 00:19:58.091 "block_size": 4096, 00:19:58.091 "num_blocks": 26476544, 00:19:58.091 "uuid": "348f26c2-3232-4962-8b10-0d66316ac02c", 00:19:58.091 "assigned_rate_limits": { 00:19:58.091 "rw_ios_per_sec": 0, 00:19:58.091 "rw_mbytes_per_sec": 0, 00:19:58.091 "r_mbytes_per_sec": 0, 00:19:58.091 "w_mbytes_per_sec": 0 00:19:58.091 }, 00:19:58.091 "claimed": false, 00:19:58.091 "zoned": false, 00:19:58.091 "supported_io_types": { 00:19:58.091 "read": true, 00:19:58.091 "write": true, 00:19:58.091 "unmap": true, 00:19:58.091 "flush": false, 00:19:58.091 "reset": true, 00:19:58.091 "nvme_admin": false, 00:19:58.091 "nvme_io": false, 00:19:58.091 "nvme_io_md": false, 00:19:58.091 "write_zeroes": true, 00:19:58.091 "zcopy": false, 00:19:58.091 "get_zone_info": false, 00:19:58.091 "zone_management": false, 00:19:58.091 "zone_append": false, 00:19:58.091 "compare": false, 00:19:58.091 "compare_and_write": false, 00:19:58.091 "abort": false, 00:19:58.091 "seek_hole": true, 00:19:58.091 "seek_data": true, 00:19:58.091 "copy": false, 00:19:58.091 "nvme_iov_md": false 00:19:58.091 }, 00:19:58.091 "driver_specific": { 00:19:58.091 "lvol": { 00:19:58.091 "lvol_store_uuid": "545d5dfe-9626-4097-9cd9-0ac04e846900", 00:19:58.091 "base_bdev": "nvme0n1", 00:19:58.091 "thin_provision": true, 00:19:58.091 "num_allocated_clusters": 0, 00:19:58.091 "snapshot": false, 00:19:58.091 "clone": false, 00:19:58.091 "esnap_clone": false 00:19:58.091 } 00:19:58.091 } 00:19:58.091 } 00:19:58.091 ]' 00:19:58.091 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:58.349 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:58.349 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:58.349 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:58.349 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:58.349 21:19:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:58.349 21:19:09 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:58.349 21:19:09 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:58.349 21:19:09 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:58.607 21:19:10 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:58.607 21:19:10 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:58.607 21:19:10 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 348f26c2-3232-4962-8b10-0d66316ac02c 00:19:58.607 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=348f26c2-3232-4962-8b10-0d66316ac02c 00:19:58.607 
21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:58.607 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:58.607 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:58.607 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 348f26c2-3232-4962-8b10-0d66316ac02c 00:19:58.865 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:58.865 { 00:19:58.865 "name": "348f26c2-3232-4962-8b10-0d66316ac02c", 00:19:58.865 "aliases": [ 00:19:58.865 "lvs/nvme0n1p0" 00:19:58.865 ], 00:19:58.865 "product_name": "Logical Volume", 00:19:58.865 "block_size": 4096, 00:19:58.865 "num_blocks": 26476544, 00:19:58.865 "uuid": "348f26c2-3232-4962-8b10-0d66316ac02c", 00:19:58.865 "assigned_rate_limits": { 00:19:58.865 "rw_ios_per_sec": 0, 00:19:58.865 "rw_mbytes_per_sec": 0, 00:19:58.865 "r_mbytes_per_sec": 0, 00:19:58.865 "w_mbytes_per_sec": 0 00:19:58.865 }, 00:19:58.865 "claimed": false, 00:19:58.865 "zoned": false, 00:19:58.865 "supported_io_types": { 00:19:58.865 "read": true, 00:19:58.865 "write": true, 00:19:58.865 "unmap": true, 00:19:58.865 "flush": false, 00:19:58.865 "reset": true, 00:19:58.865 "nvme_admin": false, 00:19:58.865 "nvme_io": false, 00:19:58.865 "nvme_io_md": false, 00:19:58.865 "write_zeroes": true, 00:19:58.865 "zcopy": false, 00:19:58.865 "get_zone_info": false, 00:19:58.865 "zone_management": false, 00:19:58.865 "zone_append": false, 00:19:58.865 "compare": false, 00:19:58.865 "compare_and_write": false, 00:19:58.865 "abort": false, 00:19:58.865 "seek_hole": true, 00:19:58.865 "seek_data": true, 00:19:58.865 "copy": false, 00:19:58.865 "nvme_iov_md": false 00:19:58.865 }, 00:19:58.865 "driver_specific": { 00:19:58.865 "lvol": { 00:19:58.865 "lvol_store_uuid": "545d5dfe-9626-4097-9cd9-0ac04e846900", 00:19:58.865 "base_bdev": "nvme0n1", 00:19:58.865 "thin_provision": true, 00:19:58.865 "num_allocated_clusters": 0, 00:19:58.865 "snapshot": false, 00:19:58.865 "clone": false, 00:19:58.865 "esnap_clone": false 00:19:58.865 } 00:19:58.865 } 00:19:58.865 } 00:19:58.865 ]' 00:19:58.865 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:58.865 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:58.865 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:58.865 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:58.865 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:58.865 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:58.865 21:19:10 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:58.865 21:19:10 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:59.123 21:19:10 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:59.123 21:19:10 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:59.123 21:19:10 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 348f26c2-3232-4962-8b10-0d66316ac02c 00:19:59.123 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=348f26c2-3232-4962-8b10-0d66316ac02c 00:19:59.123 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:59.123 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:59.123 21:19:10 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:19:59.123 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 348f26c2-3232-4962-8b10-0d66316ac02c 00:19:59.381 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:59.381 { 00:19:59.381 "name": "348f26c2-3232-4962-8b10-0d66316ac02c", 00:19:59.381 "aliases": [ 00:19:59.381 "lvs/nvme0n1p0" 00:19:59.381 ], 00:19:59.381 "product_name": "Logical Volume", 00:19:59.381 "block_size": 4096, 00:19:59.381 "num_blocks": 26476544, 00:19:59.381 "uuid": "348f26c2-3232-4962-8b10-0d66316ac02c", 00:19:59.381 "assigned_rate_limits": { 00:19:59.381 "rw_ios_per_sec": 0, 00:19:59.381 "rw_mbytes_per_sec": 0, 00:19:59.381 "r_mbytes_per_sec": 0, 00:19:59.381 "w_mbytes_per_sec": 0 00:19:59.381 }, 00:19:59.381 "claimed": false, 00:19:59.381 "zoned": false, 00:19:59.381 "supported_io_types": { 00:19:59.381 "read": true, 00:19:59.381 "write": true, 00:19:59.381 "unmap": true, 00:19:59.381 "flush": false, 00:19:59.381 "reset": true, 00:19:59.381 "nvme_admin": false, 00:19:59.381 "nvme_io": false, 00:19:59.381 "nvme_io_md": false, 00:19:59.381 "write_zeroes": true, 00:19:59.381 "zcopy": false, 00:19:59.381 "get_zone_info": false, 00:19:59.381 "zone_management": false, 00:19:59.381 "zone_append": false, 00:19:59.381 "compare": false, 00:19:59.381 "compare_and_write": false, 00:19:59.381 "abort": false, 00:19:59.381 "seek_hole": true, 00:19:59.381 "seek_data": true, 00:19:59.381 "copy": false, 00:19:59.381 "nvme_iov_md": false 00:19:59.381 }, 00:19:59.381 "driver_specific": { 00:19:59.381 "lvol": { 00:19:59.381 "lvol_store_uuid": "545d5dfe-9626-4097-9cd9-0ac04e846900", 00:19:59.381 "base_bdev": "nvme0n1", 00:19:59.381 "thin_provision": true, 00:19:59.381 "num_allocated_clusters": 0, 00:19:59.381 "snapshot": false, 00:19:59.381 "clone": false, 00:19:59.381 "esnap_clone": false 00:19:59.381 } 00:19:59.381 } 00:19:59.381 } 00:19:59.381 ]' 00:19:59.381 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:59.381 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:59.381 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:59.641 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:59.641 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:59.641 21:19:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:59.641 21:19:10 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:59.641 21:19:10 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 348f26c2-3232-4962-8b10-0d66316ac02c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:59.641 [2024-07-14 21:19:11.173658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.641 [2024-07-14 21:19:11.173717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:59.641 [2024-07-14 21:19:11.173755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:59.641 [2024-07-14 21:19:11.173771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.641 [2024-07-14 21:19:11.177180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.641 [2024-07-14 21:19:11.177240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.641 [2024-07-14 21:19:11.177273] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.373 ms 00:19:59.641 [2024-07-14 21:19:11.177287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.641 [2024-07-14 21:19:11.177461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:59.641 [2024-07-14 21:19:11.178486] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:59.641 [2024-07-14 21:19:11.178526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.641 [2024-07-14 21:19:11.178563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.641 [2024-07-14 21:19:11.178577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:19:59.641 [2024-07-14 21:19:11.178591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.641 [2024-07-14 21:19:11.178843] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b1692d4e-1846-41c9-a805-9c9f076300af 00:19:59.641 [2024-07-14 21:19:11.179833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.641 [2024-07-14 21:19:11.179884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:59.641 [2024-07-14 21:19:11.179903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:59.641 [2024-07-14 21:19:11.179916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.641 [2024-07-14 21:19:11.184433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.641 [2024-07-14 21:19:11.184484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.641 [2024-07-14 21:19:11.184505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.422 ms 00:19:59.641 [2024-07-14 21:19:11.184518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.641 [2024-07-14 21:19:11.184717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.641 [2024-07-14 21:19:11.184740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.641 [2024-07-14 21:19:11.184757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:59.641 [2024-07-14 21:19:11.184784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.641 [2024-07-14 21:19:11.184883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.641 [2024-07-14 21:19:11.184903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:59.641 [2024-07-14 21:19:11.184923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:59.641 [2024-07-14 21:19:11.184935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.641 [2024-07-14 21:19:11.184984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:59.901 [2024-07-14 21:19:11.189738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.901 [2024-07-14 21:19:11.189785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.901 [2024-07-14 21:19:11.189849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.766 ms 00:19:59.901 [2024-07-14 21:19:11.189883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.901 [2024-07-14 
21:19:11.189961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.901 [2024-07-14 21:19:11.189995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:59.901 [2024-07-14 21:19:11.190010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:59.901 [2024-07-14 21:19:11.190024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.901 [2024-07-14 21:19:11.190064] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:59.901 [2024-07-14 21:19:11.190227] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:59.901 [2024-07-14 21:19:11.190246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:59.901 [2024-07-14 21:19:11.190266] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:59.901 [2024-07-14 21:19:11.190282] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:59.901 [2024-07-14 21:19:11.190298] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:59.901 [2024-07-14 21:19:11.190311] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:59.901 [2024-07-14 21:19:11.190325] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:59.901 [2024-07-14 21:19:11.190340] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:59.901 [2024-07-14 21:19:11.190381] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:59.901 [2024-07-14 21:19:11.190395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.901 [2024-07-14 21:19:11.190410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:59.901 [2024-07-14 21:19:11.190422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:19:59.901 [2024-07-14 21:19:11.190436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.901 [2024-07-14 21:19:11.190542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.901 [2024-07-14 21:19:11.190561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:59.901 [2024-07-14 21:19:11.190574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:59.901 [2024-07-14 21:19:11.190588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.901 [2024-07-14 21:19:11.190719] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:59.901 [2024-07-14 21:19:11.190748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:59.901 [2024-07-14 21:19:11.190762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.901 [2024-07-14 21:19:11.190776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.901 [2024-07-14 21:19:11.190789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:59.901 [2024-07-14 21:19:11.190818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:59.901 [2024-07-14 21:19:11.190832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:59.901 [2024-07-14 21:19:11.190845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:19:59.901 [2024-07-14 21:19:11.190857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:59.901 [2024-07-14 21:19:11.190870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.901 [2024-07-14 21:19:11.190881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:59.901 [2024-07-14 21:19:11.190894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:59.901 [2024-07-14 21:19:11.190906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.901 [2024-07-14 21:19:11.190920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:59.901 [2024-07-14 21:19:11.190932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:59.901 [2024-07-14 21:19:11.190945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.901 [2024-07-14 21:19:11.190956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:59.901 [2024-07-14 21:19:11.190971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:59.901 [2024-07-14 21:19:11.190983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.901 [2024-07-14 21:19:11.190996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:59.901 [2024-07-14 21:19:11.191007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:59.901 [2024-07-14 21:19:11.191020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.901 [2024-07-14 21:19:11.191031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:59.901 [2024-07-14 21:19:11.191045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:59.901 [2024-07-14 21:19:11.191056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.901 [2024-07-14 21:19:11.191069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:59.902 [2024-07-14 21:19:11.191082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:59.902 [2024-07-14 21:19:11.191095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.902 [2024-07-14 21:19:11.191106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:59.902 [2024-07-14 21:19:11.191119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:59.902 [2024-07-14 21:19:11.191130] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.902 [2024-07-14 21:19:11.191144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:59.902 [2024-07-14 21:19:11.191155] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:59.902 [2024-07-14 21:19:11.191170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.902 [2024-07-14 21:19:11.191181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:59.902 [2024-07-14 21:19:11.191194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:59.902 [2024-07-14 21:19:11.191205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.902 [2024-07-14 21:19:11.191219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:59.902 [2024-07-14 21:19:11.191231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:59.902 [2024-07-14 21:19:11.191245] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.902 [2024-07-14 21:19:11.191256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:59.902 [2024-07-14 21:19:11.191270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:59.902 [2024-07-14 21:19:11.191281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.902 [2024-07-14 21:19:11.191293] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:59.902 [2024-07-14 21:19:11.191321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:59.902 [2024-07-14 21:19:11.191334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.902 [2024-07-14 21:19:11.191345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.902 [2024-07-14 21:19:11.191359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:59.902 [2024-07-14 21:19:11.191370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:59.902 [2024-07-14 21:19:11.191385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:59.902 [2024-07-14 21:19:11.191396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:59.902 [2024-07-14 21:19:11.191408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:59.902 [2024-07-14 21:19:11.191419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:59.902 [2024-07-14 21:19:11.191436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:59.902 [2024-07-14 21:19:11.191453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.902 [2024-07-14 21:19:11.191469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:59.902 [2024-07-14 21:19:11.191481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:59.902 [2024-07-14 21:19:11.191495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:59.902 [2024-07-14 21:19:11.191507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:59.902 [2024-07-14 21:19:11.191521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:59.902 [2024-07-14 21:19:11.191533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:59.902 [2024-07-14 21:19:11.191546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:59.902 [2024-07-14 21:19:11.191558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:59.902 [2024-07-14 21:19:11.191573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:59.902 [2024-07-14 21:19:11.191585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:59.902 [2024-07-14 21:19:11.191600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:59.902 [2024-07-14 21:19:11.191612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:59.902 [2024-07-14 21:19:11.191626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:59.902 [2024-07-14 21:19:11.191638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:59.902 [2024-07-14 21:19:11.191651] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:59.902 [2024-07-14 21:19:11.191664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.902 [2024-07-14 21:19:11.191679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:59.902 [2024-07-14 21:19:11.191692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:59.902 [2024-07-14 21:19:11.191705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:59.902 [2024-07-14 21:19:11.191717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:59.902 [2024-07-14 21:19:11.191732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.902 [2024-07-14 21:19:11.191744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:59.902 [2024-07-14 21:19:11.191758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:19:59.902 [2024-07-14 21:19:11.191770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.902 [2024-07-14 21:19:11.191886] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
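At this point in the trace the FTL instance ftl0 has been laid out but not yet scrubbed; the scrub of the 5 NV-cache chunks that follows accounts for most of the roughly 2.5 s total startup reported further down. Condensed from the RPC calls traced earlier in this test, the assembly of ftl0 over the two controllers looks roughly like this (a sketch only; the UUID variables stand in for the run-specific values printed by each RPC, and the 103424 MiB size comes from create_base_bdev):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base path: lvstore on the 0000:00:11.0 namespace, then a thin-provisioned lvol.
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
  lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
  # Cache path: a 5171 MiB split of the 0000:00:10.0 namespace.
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev over base + cache; -t 240 widens the RPC timeout because the
  # first startup scrubs the NV cache region before the call returns.
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10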
00:19:59.902 [2024-07-14 21:19:11.191907] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:01.824 [2024-07-14 21:19:13.239787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.824 [2024-07-14 21:19:13.240095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:01.824 [2024-07-14 21:19:13.240238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2047.908 ms 00:20:01.824 [2024-07-14 21:19:13.240293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.824 [2024-07-14 21:19:13.273352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.824 [2024-07-14 21:19:13.273653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.824 [2024-07-14 21:19:13.273790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.617 ms 00:20:01.824 [2024-07-14 21:19:13.273923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.824 [2024-07-14 21:19:13.274224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.824 [2024-07-14 21:19:13.274371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.824 [2024-07-14 21:19:13.274514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:01.824 [2024-07-14 21:19:13.274659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.824 [2024-07-14 21:19:13.329486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.824 [2024-07-14 21:19:13.329770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.824 [2024-07-14 21:19:13.329984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.730 ms 00:20:01.824 [2024-07-14 21:19:13.330169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.824 [2024-07-14 21:19:13.330412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.824 [2024-07-14 21:19:13.330505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.824 [2024-07-14 21:19:13.330712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:01.824 [2024-07-14 21:19:13.330745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.824 [2024-07-14 21:19:13.331202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.824 [2024-07-14 21:19:13.331242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.824 [2024-07-14 21:19:13.331269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:20:01.824 [2024-07-14 21:19:13.331287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.824 [2024-07-14 21:19:13.331503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.824 [2024-07-14 21:19:13.331526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.824 [2024-07-14 21:19:13.331547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:20:01.824 [2024-07-14 21:19:13.331565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.824 [2024-07-14 21:19:13.349972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.824 [2024-07-14 21:19:13.350024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.824 [2024-07-14 
21:19:13.350047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.347 ms 00:20:01.824 [2024-07-14 21:19:13.350061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.824 [2024-07-14 21:19:13.363546] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:02.084 [2024-07-14 21:19:13.377802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.084 [2024-07-14 21:19:13.377879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:02.084 [2024-07-14 21:19:13.377901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.564 ms 00:20:02.084 [2024-07-14 21:19:13.377915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.084 [2024-07-14 21:19:13.442985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.084 [2024-07-14 21:19:13.443054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:02.084 [2024-07-14 21:19:13.443077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.935 ms 00:20:02.084 [2024-07-14 21:19:13.443091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.084 [2024-07-14 21:19:13.443382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.084 [2024-07-14 21:19:13.443408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:02.084 [2024-07-14 21:19:13.443422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:20:02.084 [2024-07-14 21:19:13.443440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.084 [2024-07-14 21:19:13.475252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.084 [2024-07-14 21:19:13.475305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:02.084 [2024-07-14 21:19:13.475342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.768 ms 00:20:02.084 [2024-07-14 21:19:13.475357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.084 [2024-07-14 21:19:13.506980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.084 [2024-07-14 21:19:13.507036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:02.084 [2024-07-14 21:19:13.507057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.508 ms 00:20:02.084 [2024-07-14 21:19:13.507072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.084 [2024-07-14 21:19:13.507895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.084 [2024-07-14 21:19:13.507938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:02.084 [2024-07-14 21:19:13.507956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:20:02.084 [2024-07-14 21:19:13.507971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.084 [2024-07-14 21:19:13.599665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.084 [2024-07-14 21:19:13.599753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:02.084 [2024-07-14 21:19:13.599791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.654 ms 00:20:02.084 [2024-07-14 21:19:13.599810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.342 [2024-07-14 
21:19:13.632944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.342 [2024-07-14 21:19:13.633020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:02.342 [2024-07-14 21:19:13.633041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.033 ms 00:20:02.342 [2024-07-14 21:19:13.633059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.342 [2024-07-14 21:19:13.664438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.342 [2024-07-14 21:19:13.664490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:02.342 [2024-07-14 21:19:13.664509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.294 ms 00:20:02.342 [2024-07-14 21:19:13.664523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.342 [2024-07-14 21:19:13.695489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.342 [2024-07-14 21:19:13.695550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:02.342 [2024-07-14 21:19:13.695585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.884 ms 00:20:02.342 [2024-07-14 21:19:13.695599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.342 [2024-07-14 21:19:13.695679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.342 [2024-07-14 21:19:13.695702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:02.342 [2024-07-14 21:19:13.695716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.342 [2024-07-14 21:19:13.695732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.342 [2024-07-14 21:19:13.695860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.342 [2024-07-14 21:19:13.695884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:02.342 [2024-07-14 21:19:13.695899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:02.342 [2024-07-14 21:19:13.695933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.342 [2024-07-14 21:19:13.696948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.342 [2024-07-14 21:19:13.701135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2522.957 ms, result 0 00:20:02.342 [2024-07-14 21:19:13.701984] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.342 { 00:20:02.342 "name": "ftl0", 00:20:02.342 "uuid": "b1692d4e-1846-41c9-a805-9c9f076300af" 00:20:02.342 } 00:20:02.342 21:19:13 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:02.342 21:19:13 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:20:02.342 21:19:13 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:02.342 21:19:13 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:20:02.342 21:19:13 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:02.342 21:19:13 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:02.342 21:19:13 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:02.598 21:19:14 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:02.855 [ 00:20:02.855 { 00:20:02.855 "name": "ftl0", 00:20:02.855 "aliases": [ 00:20:02.855 "b1692d4e-1846-41c9-a805-9c9f076300af" 00:20:02.855 ], 00:20:02.855 "product_name": "FTL disk", 00:20:02.855 "block_size": 4096, 00:20:02.855 "num_blocks": 23592960, 00:20:02.855 "uuid": "b1692d4e-1846-41c9-a805-9c9f076300af", 00:20:02.855 "assigned_rate_limits": { 00:20:02.855 "rw_ios_per_sec": 0, 00:20:02.855 "rw_mbytes_per_sec": 0, 00:20:02.855 "r_mbytes_per_sec": 0, 00:20:02.855 "w_mbytes_per_sec": 0 00:20:02.855 }, 00:20:02.855 "claimed": false, 00:20:02.855 "zoned": false, 00:20:02.855 "supported_io_types": { 00:20:02.855 "read": true, 00:20:02.855 "write": true, 00:20:02.855 "unmap": true, 00:20:02.855 "flush": true, 00:20:02.855 "reset": false, 00:20:02.855 "nvme_admin": false, 00:20:02.855 "nvme_io": false, 00:20:02.855 "nvme_io_md": false, 00:20:02.855 "write_zeroes": true, 00:20:02.855 "zcopy": false, 00:20:02.855 "get_zone_info": false, 00:20:02.855 "zone_management": false, 00:20:02.855 "zone_append": false, 00:20:02.855 "compare": false, 00:20:02.855 "compare_and_write": false, 00:20:02.855 "abort": false, 00:20:02.855 "seek_hole": false, 00:20:02.855 "seek_data": false, 00:20:02.855 "copy": false, 00:20:02.855 "nvme_iov_md": false 00:20:02.855 }, 00:20:02.855 "driver_specific": { 00:20:02.855 "ftl": { 00:20:02.855 "base_bdev": "348f26c2-3232-4962-8b10-0d66316ac02c", 00:20:02.855 "cache": "nvc0n1p0" 00:20:02.855 } 00:20:02.855 } 00:20:02.855 } 00:20:02.855 ] 00:20:02.855 21:19:14 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:20:02.855 21:19:14 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:02.855 21:19:14 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:03.112 21:19:14 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:03.112 21:19:14 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:03.369 21:19:14 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:03.369 { 00:20:03.369 "name": "ftl0", 00:20:03.369 "aliases": [ 00:20:03.369 "b1692d4e-1846-41c9-a805-9c9f076300af" 00:20:03.369 ], 00:20:03.369 "product_name": "FTL disk", 00:20:03.369 "block_size": 4096, 00:20:03.369 "num_blocks": 23592960, 00:20:03.369 "uuid": "b1692d4e-1846-41c9-a805-9c9f076300af", 00:20:03.369 "assigned_rate_limits": { 00:20:03.369 "rw_ios_per_sec": 0, 00:20:03.369 "rw_mbytes_per_sec": 0, 00:20:03.369 "r_mbytes_per_sec": 0, 00:20:03.369 "w_mbytes_per_sec": 0 00:20:03.369 }, 00:20:03.369 "claimed": false, 00:20:03.369 "zoned": false, 00:20:03.369 "supported_io_types": { 00:20:03.369 "read": true, 00:20:03.369 "write": true, 00:20:03.369 "unmap": true, 00:20:03.369 "flush": true, 00:20:03.369 "reset": false, 00:20:03.369 "nvme_admin": false, 00:20:03.369 "nvme_io": false, 00:20:03.369 "nvme_io_md": false, 00:20:03.369 "write_zeroes": true, 00:20:03.369 "zcopy": false, 00:20:03.369 "get_zone_info": false, 00:20:03.369 "zone_management": false, 00:20:03.369 "zone_append": false, 00:20:03.369 "compare": false, 00:20:03.369 "compare_and_write": false, 00:20:03.369 "abort": false, 00:20:03.369 "seek_hole": false, 00:20:03.369 "seek_data": false, 00:20:03.369 "copy": false, 00:20:03.370 "nvme_iov_md": false 00:20:03.370 }, 00:20:03.370 "driver_specific": { 00:20:03.370 "ftl": { 00:20:03.370 "base_bdev": "348f26c2-3232-4962-8b10-0d66316ac02c", 00:20:03.370 "cache": "nvc0n1p0" 
00:20:03.370 } 00:20:03.370 } 00:20:03.370 } 00:20:03.370 ]' 00:20:03.370 21:19:14 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:03.370 21:19:14 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:03.370 21:19:14 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:03.627 [2024-07-14 21:19:15.053967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.627 [2024-07-14 21:19:15.054037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:03.627 [2024-07-14 21:19:15.054081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:03.628 [2024-07-14 21:19:15.054094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.628 [2024-07-14 21:19:15.054159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:03.628 [2024-07-14 21:19:15.057554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.628 [2024-07-14 21:19:15.057608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:03.628 [2024-07-14 21:19:15.057640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.373 ms 00:20:03.628 [2024-07-14 21:19:15.057659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.628 [2024-07-14 21:19:15.058264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.628 [2024-07-14 21:19:15.058304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:03.628 [2024-07-14 21:19:15.058324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:20:03.628 [2024-07-14 21:19:15.058338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.628 [2024-07-14 21:19:15.062079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.628 [2024-07-14 21:19:15.062131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:03.628 [2024-07-14 21:19:15.062147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.704 ms 00:20:03.628 [2024-07-14 21:19:15.062160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.628 [2024-07-14 21:19:15.069627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.628 [2024-07-14 21:19:15.069680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:03.628 [2024-07-14 21:19:15.069712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.416 ms 00:20:03.628 [2024-07-14 21:19:15.069725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.628 [2024-07-14 21:19:15.100456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.628 [2024-07-14 21:19:15.100521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:03.628 [2024-07-14 21:19:15.100541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.612 ms 00:20:03.628 [2024-07-14 21:19:15.100558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.628 [2024-07-14 21:19:15.118951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.628 [2024-07-14 21:19:15.119015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:03.628 [2024-07-14 21:19:15.119053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.295 ms 00:20:03.628 
[2024-07-14 21:19:15.119068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.628 [2024-07-14 21:19:15.119317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.628 [2024-07-14 21:19:15.119353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:03.628 [2024-07-14 21:19:15.119370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:20:03.628 [2024-07-14 21:19:15.119383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.628 [2024-07-14 21:19:15.150276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.628 [2024-07-14 21:19:15.150337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:03.628 [2024-07-14 21:19:15.150372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.858 ms 00:20:03.628 [2024-07-14 21:19:15.150386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.887 [2024-07-14 21:19:15.181229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.887 [2024-07-14 21:19:15.181309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:03.887 [2024-07-14 21:19:15.181344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.752 ms 00:20:03.887 [2024-07-14 21:19:15.181360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.887 [2024-07-14 21:19:15.211488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.887 [2024-07-14 21:19:15.211550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.887 [2024-07-14 21:19:15.211585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.035 ms 00:20:03.887 [2024-07-14 21:19:15.211599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.887 [2024-07-14 21:19:15.242390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.887 [2024-07-14 21:19:15.242453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:03.887 [2024-07-14 21:19:15.242488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.642 ms 00:20:03.887 [2024-07-14 21:19:15.242502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:03.887 [2024-07-14 21:19:15.242603] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:03.887 [2024-07-14 21:19:15.242633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (identical for all 100 bands)
00:20:03.888 [2024-07-14 21:19:15.244076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:03.888 [2024-07-14 21:19:15.244090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1692d4e-1846-41c9-a805-9c9f076300af
00:20:03.888 [2024-07-14 21:19:15.244107] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:03.888 [2024-07-14 21:19:15.244118] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:03.888 [2024-07-14 21:19:15.244135] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:03.888 [2024-07-14 21:19:15.244147] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:03.888 [2024-07-14 21:19:15.244161] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:03.888 [2024-07-14 21:19:15.244173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 [2024-07-14 21:19:15.244186]
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:03.888 [2024-07-14 21:19:15.244196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:03.888 [2024-07-14 21:19:15.244209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:03.888 [2024-07-14 21:19:15.244221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.888 [2024-07-14 21:19:15.244236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:03.888 [2024-07-14 21:19:15.244250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:20:03.888 [2024-07-14 21:19:15.244263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.888 [2024-07-14 21:19:15.260850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.888 [2024-07-14 21:19:15.260906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:03.888 [2024-07-14 21:19:15.260925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.548 ms 00:20:03.888 [2024-07-14 21:19:15.260942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.888 [2024-07-14 21:19:15.261418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.888 [2024-07-14 21:19:15.261456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:03.888 [2024-07-14 21:19:15.261473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:20:03.888 [2024-07-14 21:19:15.261487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.888 [2024-07-14 21:19:15.317656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.888 [2024-07-14 21:19:15.317723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.888 [2024-07-14 21:19:15.317758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.888 [2024-07-14 21:19:15.317772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.888 [2024-07-14 21:19:15.317931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.888 [2024-07-14 21:19:15.317955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.888 [2024-07-14 21:19:15.317969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.888 [2024-07-14 21:19:15.317982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.888 [2024-07-14 21:19:15.318083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.888 [2024-07-14 21:19:15.318110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.888 [2024-07-14 21:19:15.318124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.888 [2024-07-14 21:19:15.318140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.888 [2024-07-14 21:19:15.318180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.888 [2024-07-14 21:19:15.318197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.888 [2024-07-14 21:19:15.318210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.888 [2024-07-14 21:19:15.318223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.888 [2024-07-14 21:19:15.419066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:20:03.888 [2024-07-14 21:19:15.419149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.888 [2024-07-14 21:19:15.419185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.888 [2024-07-14 21:19:15.419198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.147 [2024-07-14 21:19:15.500487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.147 [2024-07-14 21:19:15.500575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.147 [2024-07-14 21:19:15.500595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.147 [2024-07-14 21:19:15.500610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.147 [2024-07-14 21:19:15.500734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.147 [2024-07-14 21:19:15.500758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.147 [2024-07-14 21:19:15.500789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.147 [2024-07-14 21:19:15.500805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.147 [2024-07-14 21:19:15.500898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.147 [2024-07-14 21:19:15.500918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.147 [2024-07-14 21:19:15.500931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.147 [2024-07-14 21:19:15.500945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.147 [2024-07-14 21:19:15.501093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.147 [2024-07-14 21:19:15.501117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.147 [2024-07-14 21:19:15.501150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.147 [2024-07-14 21:19:15.501168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.147 [2024-07-14 21:19:15.501242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.147 [2024-07-14 21:19:15.501273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:04.147 [2024-07-14 21:19:15.501288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.147 [2024-07-14 21:19:15.501302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.147 [2024-07-14 21:19:15.501364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.147 [2024-07-14 21:19:15.501388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.147 [2024-07-14 21:19:15.501402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.147 [2024-07-14 21:19:15.501421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.147 [2024-07-14 21:19:15.501492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.147 [2024-07-14 21:19:15.501515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.147 [2024-07-14 21:19:15.501528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.147 [2024-07-14 21:19:15.501542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.147 [2024-07-14 
21:19:15.501769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 447.798 ms, result 0 00:20:04.147 true 00:20:04.147 21:19:15 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 80025 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80025 ']' 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80025 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80025 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:04.147 killing process with pid 80025 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80025' 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80025 00:20:04.147 21:19:15 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80025 00:20:09.414 21:19:19 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:09.674 65536+0 records in 00:20:09.674 65536+0 records out 00:20:09.674 268435456 bytes (268 MB, 256 MiB) copied, 1.12845 s, 238 MB/s 00:20:09.674 21:19:21 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:09.674 [2024-07-14 21:19:21.191447] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
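The two shell steps traced just above form the data-load phase of the trim test: dd generates a 256 MiB random pattern (65536 blocks of 4 KiB = 268435456 bytes, which over the reported 1.12845 s is the logged 238 MB/s), and spdk_dd then writes that pattern onto the FTL bdev ftl0 through the bdev stack described in ftl.json. A minimal standalone sketch of the same phase, under one assumption: the paths come from the log, but redirecting dd's output into the random_pattern file is inferred, since trim.sh elides that in the trace.

```bash
#!/usr/bin/env bash
# Data-load phase of the FTL trim test (sketch). Paths are taken from the
# log; writing dd's output to random_pattern is an assumption -- the trace
# does not show where trim.sh sends it.
SPDK=/home/vagrant/spdk_repo/spdk

# 65536 x 4 KiB blocks = 268435456 bytes (256 MiB) of random data.
dd if=/dev/urandom of="$SPDK/test/ftl/random_pattern" bs=4K count=65536

# Copy the pattern onto the FTL bdev ftl0; spdk_dd instantiates the bdev
# layer (base device + NV cache) from the JSON config written earlier.
"$SPDK/build/bin/spdk_dd" \
    --if="$SPDK/test/ftl/random_pattern" \
    --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json"
```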
00:20:09.674 [2024-07-14 21:19:21.191593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80220 ] 00:20:09.934 [2024-07-14 21:19:21.350039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.193 [2024-07-14 21:19:21.526450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.452 [2024-07-14 21:19:21.817749] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.452 [2024-07-14 21:19:21.817884] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.452 [2024-07-14 21:19:21.978253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.452 [2024-07-14 21:19:21.978323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:10.452 [2024-07-14 21:19:21.978357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:10.452 [2024-07-14 21:19:21.978368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.452 [2024-07-14 21:19:21.981513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.452 [2024-07-14 21:19:21.981569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.452 [2024-07-14 21:19:21.981601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:20:10.453 [2024-07-14 21:19:21.981612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.453 [2024-07-14 21:19:21.981742] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:10.453 [2024-07-14 21:19:21.982737] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:10.453 [2024-07-14 21:19:21.982841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.453 [2024-07-14 21:19:21.982890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.453 [2024-07-14 21:19:21.982903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:20:10.453 [2024-07-14 21:19:21.982914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.453 [2024-07-14 21:19:21.984187] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:10.714 [2024-07-14 21:19:22.000122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.714 [2024-07-14 21:19:22.000196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:10.714 [2024-07-14 21:19:22.000234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.935 ms 00:20:10.714 [2024-07-14 21:19:22.000244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.000370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.714 [2024-07-14 21:19:22.000416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:10.714 [2024-07-14 21:19:22.000430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:10.714 [2024-07-14 21:19:22.000441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.004791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:10.714 [2024-07-14 21:19:22.004873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.714 [2024-07-14 21:19:22.004889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.290 ms 00:20:10.714 [2024-07-14 21:19:22.004901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.005027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.714 [2024-07-14 21:19:22.005047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.714 [2024-07-14 21:19:22.005061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:10.714 [2024-07-14 21:19:22.005071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.005113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.714 [2024-07-14 21:19:22.005127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:10.714 [2024-07-14 21:19:22.005139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:10.714 [2024-07-14 21:19:22.005154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.005186] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:10.714 [2024-07-14 21:19:22.009523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.714 [2024-07-14 21:19:22.009572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.714 [2024-07-14 21:19:22.009602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.346 ms 00:20:10.714 [2024-07-14 21:19:22.009624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.009704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.714 [2024-07-14 21:19:22.009720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:10.714 [2024-07-14 21:19:22.009733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:10.714 [2024-07-14 21:19:22.009743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.009774] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:10.714 [2024-07-14 21:19:22.009800] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:10.714 [2024-07-14 21:19:22.009877] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:10.714 [2024-07-14 21:19:22.009901] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:10.714 [2024-07-14 21:19:22.010007] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:10.714 [2024-07-14 21:19:22.010023] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:10.714 [2024-07-14 21:19:22.010038] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:10.714 [2024-07-14 21:19:22.010054] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:10.714 [2024-07-14 21:19:22.010067] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:10.714 [2024-07-14 21:19:22.010079] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:10.714 [2024-07-14 21:19:22.010096] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:10.714 [2024-07-14 21:19:22.010107] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:10.714 [2024-07-14 21:19:22.010117] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:10.714 [2024-07-14 21:19:22.010129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.714 [2024-07-14 21:19:22.010140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:10.714 [2024-07-14 21:19:22.010152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:20:10.714 [2024-07-14 21:19:22.010162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.010259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.714 [2024-07-14 21:19:22.010273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:10.714 [2024-07-14 21:19:22.010285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:10.714 [2024-07-14 21:19:22.010299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.714 [2024-07-14 21:19:22.010432] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:10.714 [2024-07-14 21:19:22.010460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:10.714 [2024-07-14 21:19:22.010474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.714 [2024-07-14 21:19:22.010486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.714 [2024-07-14 21:19:22.010498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:10.714 [2024-07-14 21:19:22.010508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:10.714 [2024-07-14 21:19:22.010519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:10.714 [2024-07-14 21:19:22.010529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:10.714 [2024-07-14 21:19:22.010539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:10.714 [2024-07-14 21:19:22.010550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.715 [2024-07-14 21:19:22.010560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:10.715 [2024-07-14 21:19:22.010570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:10.715 [2024-07-14 21:19:22.010580] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.715 [2024-07-14 21:19:22.010591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:10.715 [2024-07-14 21:19:22.010601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:10.715 [2024-07-14 21:19:22.010611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:10.715 [2024-07-14 21:19:22.010631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:10.715 [2024-07-14 21:19:22.010654] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:10.715 [2024-07-14 21:19:22.010676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.715 [2024-07-14 21:19:22.010695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:10.715 [2024-07-14 21:19:22.010705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.715 [2024-07-14 21:19:22.010725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:10.715 [2024-07-14 21:19:22.010735] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.715 [2024-07-14 21:19:22.010755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:10.715 [2024-07-14 21:19:22.010765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.715 [2024-07-14 21:19:22.010785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:10.715 [2024-07-14 21:19:22.010815] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.715 [2024-07-14 21:19:22.010839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:10.715 [2024-07-14 21:19:22.010849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:10.715 [2024-07-14 21:19:22.010860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.715 [2024-07-14 21:19:22.010870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:10.715 [2024-07-14 21:19:22.010880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:10.715 [2024-07-14 21:19:22.010890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:10.715 [2024-07-14 21:19:22.010910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:10.715 [2024-07-14 21:19:22.010920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010929] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:10.715 [2024-07-14 21:19:22.010941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:10.715 [2024-07-14 21:19:22.010952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.715 [2024-07-14 21:19:22.010962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.715 [2024-07-14 21:19:22.010974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:10.715 [2024-07-14 21:19:22.010984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:10.715 [2024-07-14 21:19:22.010994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:10.715 
[2024-07-14 21:19:22.011004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:10.715 [2024-07-14 21:19:22.011014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:10.715 [2024-07-14 21:19:22.011025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:10.715 [2024-07-14 21:19:22.011036] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:10.715 [2024-07-14 21:19:22.011055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.715 [2024-07-14 21:19:22.011069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:10.715 [2024-07-14 21:19:22.011080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:10.715 [2024-07-14 21:19:22.011091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:10.715 [2024-07-14 21:19:22.011102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:10.715 [2024-07-14 21:19:22.011113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:10.715 [2024-07-14 21:19:22.011124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:10.715 [2024-07-14 21:19:22.011135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:10.715 [2024-07-14 21:19:22.011146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:10.715 [2024-07-14 21:19:22.011157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:10.715 [2024-07-14 21:19:22.011167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:10.715 [2024-07-14 21:19:22.011178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:10.715 [2024-07-14 21:19:22.011189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:10.715 [2024-07-14 21:19:22.011200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:10.715 [2024-07-14 21:19:22.011211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:10.715 [2024-07-14 21:19:22.011222] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:10.715 [2024-07-14 21:19:22.011234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.715 [2024-07-14 21:19:22.011246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:10.715 [2024-07-14 21:19:22.011257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:10.715 [2024-07-14 21:19:22.011268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:10.715 [2024-07-14 21:19:22.011279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:10.715 [2024-07-14 21:19:22.011291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.715 [2024-07-14 21:19:22.011303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:10.715 [2024-07-14 21:19:22.011314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:20:10.715 [2024-07-14 21:19:22.011325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.715 [2024-07-14 21:19:22.058362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.715 [2024-07-14 21:19:22.058438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.715 [2024-07-14 21:19:22.058474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.963 ms 00:20:10.715 [2024-07-14 21:19:22.058485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.715 [2024-07-14 21:19:22.058679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.715 [2024-07-14 21:19:22.058698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:10.715 [2024-07-14 21:19:22.058728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:10.715 [2024-07-14 21:19:22.058744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.715 [2024-07-14 21:19:22.095724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.095836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.716 [2024-07-14 21:19:22.095873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.931 ms 00:20:10.716 [2024-07-14 21:19:22.095886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.096017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.096036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.716 [2024-07-14 21:19:22.096049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:10.716 [2024-07-14 21:19:22.096060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.096395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.096423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.716 [2024-07-14 21:19:22.096438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:10.716 [2024-07-14 21:19:22.096449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.096603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.096633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.716 [2024-07-14 21:19:22.096647] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:10.716 [2024-07-14 21:19:22.096658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.113949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.113991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.716 [2024-07-14 21:19:22.114009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.260 ms 00:20:10.716 [2024-07-14 21:19:22.114020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.130823] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:10.716 [2024-07-14 21:19:22.130878] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:10.716 [2024-07-14 21:19:22.130896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.130918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:10.716 [2024-07-14 21:19:22.130931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.715 ms 00:20:10.716 [2024-07-14 21:19:22.130942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.159368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.159425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:10.716 [2024-07-14 21:19:22.159458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.329 ms 00:20:10.716 [2024-07-14 21:19:22.159468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.174897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.174953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:10.716 [2024-07-14 21:19:22.174985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.338 ms 00:20:10.716 [2024-07-14 21:19:22.174996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.190030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.190083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:10.716 [2024-07-14 21:19:22.190114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.947 ms 00:20:10.716 [2024-07-14 21:19:22.190124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.716 [2024-07-14 21:19:22.191012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.716 [2024-07-14 21:19:22.191060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:10.716 [2024-07-14 21:19:22.191079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:20:10.716 [2024-07-14 21:19:22.191090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.976 [2024-07-14 21:19:22.260103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.976 [2024-07-14 21:19:22.260187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:10.976 [2024-07-14 21:19:22.260229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.980 ms 00:20:10.976 [2024-07-14 21:19:22.260255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.976 [2024-07-14 21:19:22.272269] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:10.976 [2024-07-14 21:19:22.285154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.976 [2024-07-14 21:19:22.285244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:10.976 [2024-07-14 21:19:22.285280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.722 ms 00:20:10.976 [2024-07-14 21:19:22.285291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.976 [2024-07-14 21:19:22.285419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.976 [2024-07-14 21:19:22.285438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:10.976 [2024-07-14 21:19:22.285450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:10.976 [2024-07-14 21:19:22.285464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.976 [2024-07-14 21:19:22.285527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.976 [2024-07-14 21:19:22.285558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:10.976 [2024-07-14 21:19:22.285569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:10.976 [2024-07-14 21:19:22.285580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.976 [2024-07-14 21:19:22.285627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.976 [2024-07-14 21:19:22.285640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:10.976 [2024-07-14 21:19:22.285651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:10.976 [2024-07-14 21:19:22.285662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.976 [2024-07-14 21:19:22.285703] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:10.976 [2024-07-14 21:19:22.285719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.976 [2024-07-14 21:19:22.285732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:10.976 [2024-07-14 21:19:22.285745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:10.976 [2024-07-14 21:19:22.285757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.976 [2024-07-14 21:19:22.314847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.976 [2024-07-14 21:19:22.314905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:10.976 [2024-07-14 21:19:22.314939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.060 ms 00:20:10.976 [2024-07-14 21:19:22.314957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.976 [2024-07-14 21:19:22.315076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.976 [2024-07-14 21:19:22.315095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:10.976 [2024-07-14 21:19:22.315108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:10.976 [2024-07-14 21:19:22.315118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
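With the FTL startup sequence above complete, the layout numbers it printed can be cross-checked: 23592960 L2P entries at the dumped address size of 4 bytes is exactly the 90.00 MiB shown for the l2p region, and the same entry count at the 4 KiB block size the test works in matches the nb=23592960 that trim.sh@60 extracted earlier with jq '.[] .num_blocks'. A quick sanity check of that arithmetic (all constants are from the log; the shell math is only illustrative):

```bash
# L2P table: 23592960 entries x 4 bytes/entry = 90 MiB,
# matching "Region l2p ... blocks: 90.00 MiB" in the layout dump above.
echo $(( 23592960 * 4 / 1024 / 1024 ))            # -> 90

# Addressable space: 23592960 blocks x 4 KiB/block = 90 GiB,
# consistent with nb=23592960 read back via jq '.[] .num_blocks'.
echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 ))  # -> 90
```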
00:20:10.976 [2024-07-14 21:19:22.316164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:10.976 [2024-07-14 21:19:22.320050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.545 ms, result 0 00:20:10.976 [2024-07-14 21:19:22.320967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.976 [2024-07-14 21:19:22.336561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.086  Copying: 23/256 [MB] (23 MBps) Copying: 46/256 [MB] (23 MBps) Copying: 69/256 [MB] (23 MBps) Copying: 92/256 [MB] (22 MBps) Copying: 115/256 [MB] (22 MBps) Copying: 138/256 [MB] (23 MBps) Copying: 162/256 [MB] (23 MBps) Copying: 185/256 [MB] (23 MBps) Copying: 208/256 [MB] (23 MBps) Copying: 230/256 [MB] (21 MBps) Copying: 252/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-14 21:19:33.473514] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:22.086 [2024-07-14 21:19:33.485182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.086 [2024-07-14 21:19:33.485279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:22.086 [2024-07-14 21:19:33.485313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.086 [2024-07-14 21:19:33.485325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.086 [2024-07-14 21:19:33.485353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:22.086 [2024-07-14 21:19:33.488441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.086 [2024-07-14 21:19:33.488475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:22.086 [2024-07-14 21:19:33.488505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.069 ms 00:20:22.086 [2024-07-14 21:19:33.488523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.086 [2024-07-14 21:19:33.490544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.086 [2024-07-14 21:19:33.490599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:22.086 [2024-07-14 21:19:33.490631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:20:22.086 [2024-07-14 21:19:33.490641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.086 [2024-07-14 21:19:33.497759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.086 [2024-07-14 21:19:33.497836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:22.086 [2024-07-14 21:19:33.497869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.095 ms 00:20:22.086 [2024-07-14 21:19:33.497880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.086 [2024-07-14 21:19:33.504920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.086 [2024-07-14 21:19:33.504971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:22.086 [2024-07-14 21:19:33.505000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.970 ms 00:20:22.086 [2024-07-14 21:19:33.505011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
00:20:22.086 [2024-07-14 21:19:33.533514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:22.086 [2024-07-14 21:19:33.533567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:22.086 [2024-07-14 21:19:33.533599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.453 ms
00:20:22.086 [2024-07-14 21:19:33.533609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:22.086 [2024-07-14 21:19:33.550452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:22.086 [2024-07-14 21:19:33.550506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:22.086 [2024-07-14 21:19:33.550537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.779 ms
00:20:22.086 [2024-07-14 21:19:33.550548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:22.086 [2024-07-14 21:19:33.550701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:22.086 [2024-07-14 21:19:33.550723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:22.086 [2024-07-14 21:19:33.550747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms
00:20:22.086 [2024-07-14 21:19:33.550775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:22.086 [2024-07-14 21:19:33.580002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:22.086 [2024-07-14 21:19:33.580056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:20:22.086 [2024-07-14 21:19:33.580087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.206 ms
00:20:22.086 [2024-07-14 21:19:33.580098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:22.086 [2024-07-14 21:19:33.608835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:22.086 [2024-07-14 21:19:33.608909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:20:22.086 [2024-07-14 21:19:33.608941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.675 ms
00:20:22.086 [2024-07-14 21:19:33.608952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:22.347 [2024-07-14 21:19:33.638332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:22.347 [2024-07-14 21:19:33.638384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:22.347 [2024-07-14 21:19:33.638414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.319 ms
00:20:22.347 [2024-07-14 21:19:33.638424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:22.347 [2024-07-14 21:19:33.667383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:22.347 [2024-07-14 21:19:33.667436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:22.347 [2024-07-14 21:19:33.667467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.870 ms
00:20:22.347 [2024-07-14 21:19:33.667478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:22.347 [2024-07-14 21:19:33.667540] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:22.347 [2024-07-14 21:19:33.667563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-99 omitted: each reports 0 / 261120 wr_cnt: 0 state: free ...]
00:20:22.348 [2024-07-14 21:19:33.668773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:22.348 [2024-07-14 21:19:33.668793] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:22.348 [2024-07-14 21:19:33.668822] ftl_debug.c:
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1692d4e-1846-41c9-a805-9c9f076300af 00:20:22.348 [2024-07-14 21:19:33.668849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:22.348 [2024-07-14 21:19:33.668863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:22.348 [2024-07-14 21:19:33.668874] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:22.348 [2024-07-14 21:19:33.668897] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:22.348 [2024-07-14 21:19:33.668908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:22.348 [2024-07-14 21:19:33.668919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:22.348 [2024-07-14 21:19:33.668929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:22.348 [2024-07-14 21:19:33.668938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:22.348 [2024-07-14 21:19:33.668949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:22.348 [2024-07-14 21:19:33.668960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.348 [2024-07-14 21:19:33.668971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:22.348 [2024-07-14 21:19:33.668983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:20:22.348 [2024-07-14 21:19:33.668993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.348 [2024-07-14 21:19:33.685915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.348 [2024-07-14 21:19:33.685956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:22.348 [2024-07-14 21:19:33.685974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.892 ms 00:20:22.348 [2024-07-14 21:19:33.685985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.348 [2024-07-14 21:19:33.686434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.348 [2024-07-14 21:19:33.686466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:22.348 [2024-07-14 21:19:33.686480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:20:22.348 [2024-07-14 21:19:33.686498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.348 [2024-07-14 21:19:33.725840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.348 [2024-07-14 21:19:33.725920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:22.348 [2024-07-14 21:19:33.725951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.348 [2024-07-14 21:19:33.725963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.348 [2024-07-14 21:19:33.726058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.348 [2024-07-14 21:19:33.726074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.348 [2024-07-14 21:19:33.726086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.348 [2024-07-14 21:19:33.726102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.348 [2024-07-14 21:19:33.726177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.348 [2024-07-14 21:19:33.726197] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.348 [2024-07-14 21:19:33.726209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.348 [2024-07-14 21:19:33.726220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.348 [2024-07-14 21:19:33.726244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.348 [2024-07-14 21:19:33.726256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.348 [2024-07-14 21:19:33.726268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.348 [2024-07-14 21:19:33.726278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.348 [2024-07-14 21:19:33.818740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.348 [2024-07-14 21:19:33.818837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.348 [2024-07-14 21:19:33.818872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.348 [2024-07-14 21:19:33.818883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.608 [2024-07-14 21:19:33.901007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.608 [2024-07-14 21:19:33.901068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.608 [2024-07-14 21:19:33.901087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.608 [2024-07-14 21:19:33.901099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.608 [2024-07-14 21:19:33.901192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.608 [2024-07-14 21:19:33.901237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.608 [2024-07-14 21:19:33.901248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.608 [2024-07-14 21:19:33.901273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.608 [2024-07-14 21:19:33.901338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.608 [2024-07-14 21:19:33.901350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.608 [2024-07-14 21:19:33.901366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.608 [2024-07-14 21:19:33.901376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.608 [2024-07-14 21:19:33.901495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.608 [2024-07-14 21:19:33.901515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.608 [2024-07-14 21:19:33.901528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.608 [2024-07-14 21:19:33.901539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.608 [2024-07-14 21:19:33.901586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.608 [2024-07-14 21:19:33.901612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:22.608 [2024-07-14 21:19:33.901625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.608 [2024-07-14 21:19:33.901636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.608 [2024-07-14 21:19:33.901682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:22.608 [2024-07-14 21:19:33.901702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.608 [2024-07-14 21:19:33.901713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.608 [2024-07-14 21:19:33.901723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.608 [2024-07-14 21:19:33.901774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.608 [2024-07-14 21:19:33.901789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.608 [2024-07-14 21:19:33.901834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.608 [2024-07-14 21:19:33.901854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.608 [2024-07-14 21:19:33.902039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.847 ms, result 0 00:20:23.547 00:20:23.547 00:20:23.547 21:19:34 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=80361 00:20:23.547 21:19:34 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:23.547 21:19:34 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 80361 00:20:23.547 21:19:34 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80361 ']' 00:20:23.547 21:19:34 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.547 21:19:34 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.547 21:19:34 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.547 21:19:34 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.547 21:19:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:23.547 [2024-07-14 21:19:35.072477] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
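
The trim fixture above follows the standard SPDK autotest pattern: start spdk_tgt in the background, record its pid in svcpid, and block in waitforlisten until the target answers on the RPC socket before issuing any rpc.py calls. A minimal standalone sketch of that launch-and-wait loop, where the polling below is a simplified stand-in for the autotest_common.sh waitforlisten helper (rpc_get_methods is a standard SPDK RPC; paths and flags mirror the log above):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -L ftl_init &    # -L enables the ftl_init debug log flag, as in trim.sh
    svcpid=$!
    # poll the default UNIX-domain RPC socket until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The two bdev_ftl_unmap calls issued further down then exercise the first and the last 1024 blocks of the logical space: the startup layout dump below reports L2P entries: 23592960, and 23592960 - 1024 = 23591936, which is exactly the --lba passed to the second unmap.
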
00:20:23.547 [2024-07-14 21:19:35.072647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80361 ] 00:20:23.806 [2024-07-14 21:19:35.239161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.066 [2024-07-14 21:19:35.411552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.633 21:19:36 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.633 21:19:36 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:20:24.633 21:19:36 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:24.892 [2024-07-14 21:19:36.334451] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.892 [2024-07-14 21:19:36.334554] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:25.152 [2024-07-14 21:19:36.511318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.152 [2024-07-14 21:19:36.511389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:25.152 [2024-07-14 21:19:36.511424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:25.152 [2024-07-14 21:19:36.511437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.152 [2024-07-14 21:19:36.514553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.152 [2024-07-14 21:19:36.514611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.153 [2024-07-14 21:19:36.514643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.091 ms 00:20:25.153 [2024-07-14 21:19:36.514655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.514782] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:25.153 [2024-07-14 21:19:36.515786] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:25.153 [2024-07-14 21:19:36.515869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.515902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.153 [2024-07-14 21:19:36.515916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:20:25.153 [2024-07-14 21:19:36.515929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.517245] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:25.153 [2024-07-14 21:19:36.532131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.532186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:25.153 [2024-07-14 21:19:36.532236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.882 ms 00:20:25.153 [2024-07-14 21:19:36.532248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.532353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.532373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:25.153 [2024-07-14 21:19:36.532430] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:25.153 [2024-07-14 21:19:36.532443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.536670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.536744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.153 [2024-07-14 21:19:36.536781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.161 ms 00:20:25.153 [2024-07-14 21:19:36.536793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.536933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.536954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.153 [2024-07-14 21:19:36.536969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:25.153 [2024-07-14 21:19:36.536996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.537061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.537075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:25.153 [2024-07-14 21:19:36.537090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:25.153 [2024-07-14 21:19:36.537102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.537138] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:25.153 [2024-07-14 21:19:36.541400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.541452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.153 [2024-07-14 21:19:36.541482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:20:25.153 [2024-07-14 21:19:36.541495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.541559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.541582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:25.153 [2024-07-14 21:19:36.541595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:25.153 [2024-07-14 21:19:36.541610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.541638] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:25.153 [2024-07-14 21:19:36.541664] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:25.153 [2024-07-14 21:19:36.541746] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:25.153 [2024-07-14 21:19:36.541774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:25.153 [2024-07-14 21:19:36.541895] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:25.153 [2024-07-14 21:19:36.541921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:25.153 [2024-07-14 21:19:36.541940] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:25.153 [2024-07-14 21:19:36.541958] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:25.153 [2024-07-14 21:19:36.541971] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:25.153 [2024-07-14 21:19:36.541986] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:25.153 [2024-07-14 21:19:36.541997] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:25.153 [2024-07-14 21:19:36.542010] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:25.153 [2024-07-14 21:19:36.542022] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:25.153 [2024-07-14 21:19:36.542038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.542050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:25.153 [2024-07-14 21:19:36.542063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:20:25.153 [2024-07-14 21:19:36.542075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.542176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-07-14 21:19:36.542191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:25.153 [2024-07-14 21:19:36.542205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:25.153 [2024-07-14 21:19:36.542216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-07-14 21:19:36.542337] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:25.153 [2024-07-14 21:19:36.542357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:25.153 [2024-07-14 21:19:36.542372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.153 [2024-07-14 21:19:36.542384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:25.153 [2024-07-14 21:19:36.542409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:25.153 [2024-07-14 21:19:36.542434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:25.153 [2024-07-14 21:19:36.542450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.153 [2024-07-14 21:19:36.542473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:25.153 [2024-07-14 21:19:36.542484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:25.153 [2024-07-14 21:19:36.542496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.153 [2024-07-14 21:19:36.542507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:25.153 [2024-07-14 21:19:36.542519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:25.153 [2024-07-14 21:19:36.542530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.153 
[2024-07-14 21:19:36.542542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:25.153 [2024-07-14 21:19:36.542555] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:25.153 [2024-07-14 21:19:36.542567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:25.153 [2024-07-14 21:19:36.542591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.153 [2024-07-14 21:19:36.542613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:25.153 [2024-07-14 21:19:36.542624] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.153 [2024-07-14 21:19:36.542649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:25.153 [2024-07-14 21:19:36.542661] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.153 [2024-07-14 21:19:36.542695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:25.153 [2024-07-14 21:19:36.542707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.153 [2024-07-14 21:19:36.542731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:25.153 [2024-07-14 21:19:36.542744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.153 [2024-07-14 21:19:36.542767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:25.153 [2024-07-14 21:19:36.542777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:25.153 [2024-07-14 21:19:36.542790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.153 [2024-07-14 21:19:36.542815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:25.153 [2024-07-14 21:19:36.542830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:25.153 [2024-07-14 21:19:36.542840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:25.153 [2024-07-14 21:19:36.542866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:25.153 [2024-07-14 21:19:36.542878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542889] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:25.153 [2024-07-14 21:19:36.542905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:25.153 [2024-07-14 21:19:36.542916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.153 [2024-07-14 21:19:36.542929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.153 [2024-07-14 21:19:36.542941] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:25.153 [2024-07-14 21:19:36.542954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:25.153 [2024-07-14 21:19:36.542966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:25.153 [2024-07-14 21:19:36.542979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:25.153 [2024-07-14 21:19:36.542989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:25.153 [2024-07-14 21:19:36.543002] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:25.154 [2024-07-14 21:19:36.543015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:25.154 [2024-07-14 21:19:36.543032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.154 [2024-07-14 21:19:36.543045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:25.154 [2024-07-14 21:19:36.543063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:25.154 [2024-07-14 21:19:36.543074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:25.154 [2024-07-14 21:19:36.543088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:25.154 [2024-07-14 21:19:36.543100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:25.154 [2024-07-14 21:19:36.543113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:25.154 [2024-07-14 21:19:36.543125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:25.154 [2024-07-14 21:19:36.543138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:25.154 [2024-07-14 21:19:36.543150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:25.154 [2024-07-14 21:19:36.543163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:25.154 [2024-07-14 21:19:36.543175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:25.154 [2024-07-14 21:19:36.543188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:25.154 [2024-07-14 21:19:36.543200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:25.154 [2024-07-14 21:19:36.543213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:25.154 [2024-07-14 21:19:36.543225] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:25.154 [2024-07-14 
21:19:36.543240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.154 [2024-07-14 21:19:36.543253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:25.154 [2024-07-14 21:19:36.543269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:25.154 [2024-07-14 21:19:36.543281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:25.154 [2024-07-14 21:19:36.543294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:25.154 [2024-07-14 21:19:36.543307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.543323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:25.154 [2024-07-14 21:19:36.543336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:20:25.154 [2024-07-14 21:19:36.543349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.574372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.574448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.154 [2024-07-14 21:19:36.574484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.920 ms 00:20:25.154 [2024-07-14 21:19:36.574500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.574673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.574694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:25.154 [2024-07-14 21:19:36.574707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:25.154 [2024-07-14 21:19:36.574719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.610136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.610226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.154 [2024-07-14 21:19:36.610259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.373 ms 00:20:25.154 [2024-07-14 21:19:36.610273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.610383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.610403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.154 [2024-07-14 21:19:36.610417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:25.154 [2024-07-14 21:19:36.610429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.610779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.610824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.154 [2024-07-14 21:19:36.610845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:20:25.154 [2024-07-14 21:19:36.610858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.611009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.611030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.154 [2024-07-14 21:19:36.611043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:25.154 [2024-07-14 21:19:36.611056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.627451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.627513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.154 [2024-07-14 21:19:36.627545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.367 ms 00:20:25.154 [2024-07-14 21:19:36.627557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.642956] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:25.154 [2024-07-14 21:19:36.643014] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:25.154 [2024-07-14 21:19:36.643048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.643062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:25.154 [2024-07-14 21:19:36.643075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.356 ms 00:20:25.154 [2024-07-14 21:19:36.643087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.670479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.670536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:25.154 [2024-07-14 21:19:36.670569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.303 ms 00:20:25.154 [2024-07-14 21:19:36.670582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.154 [2024-07-14 21:19:36.685335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.154 [2024-07-14 21:19:36.685392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:25.154 [2024-07-14 21:19:36.685433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.666 ms 00:20:25.154 [2024-07-14 21:19:36.685448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.700354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.700432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:25.414 [2024-07-14 21:19:36.700449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.825 ms 00:20:25.414 [2024-07-14 21:19:36.700462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.701320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.701370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:25.414 [2024-07-14 21:19:36.701385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:20:25.414 [2024-07-14 21:19:36.701399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 
21:19:36.788064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.788158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:25.414 [2024-07-14 21:19:36.788181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.634 ms 00:20:25.414 [2024-07-14 21:19:36.788195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.799965] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:25.414 [2024-07-14 21:19:36.813356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.813455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:25.414 [2024-07-14 21:19:36.813496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.989 ms 00:20:25.414 [2024-07-14 21:19:36.813511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.813649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.813668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:25.414 [2024-07-14 21:19:36.813683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:25.414 [2024-07-14 21:19:36.813695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.813776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.813792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:25.414 [2024-07-14 21:19:36.813807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:25.414 [2024-07-14 21:19:36.813818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.813883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.813898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:25.414 [2024-07-14 21:19:36.813916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:25.414 [2024-07-14 21:19:36.813928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.813969] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:25.414 [2024-07-14 21:19:36.813984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.814000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:25.414 [2024-07-14 21:19:36.814012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:25.414 [2024-07-14 21:19:36.814025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.843733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.843792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:25.414 [2024-07-14 21:19:36.843836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.678 ms 00:20:25.414 [2024-07-14 21:19:36.843851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.843968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.414 [2024-07-14 21:19:36.843993] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:25.414 [2024-07-14 21:19:36.844022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:25.414 [2024-07-14 21:19:36.844051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.414 [2024-07-14 21:19:36.844997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:25.414 [2024-07-14 21:19:36.849190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.299 ms, result 0 00:20:25.414 [2024-07-14 21:19:36.850473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:25.414 Some configs were skipped because the RPC state that can call them passed over. 00:20:25.414 21:19:36 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:25.673 [2024-07-14 21:19:37.136069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.673 [2024-07-14 21:19:37.136129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:25.673 [2024-07-14 21:19:37.136155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:20:25.673 [2024-07-14 21:19:37.136168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.673 [2024-07-14 21:19:37.136216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.779 ms, result 0 00:20:25.673 true 00:20:25.673 21:19:37 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:25.933 [2024-07-14 21:19:37.355876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.933 [2024-07-14 21:19:37.355937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:25.933 [2024-07-14 21:19:37.355957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:20:25.933 [2024-07-14 21:19:37.355971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.933 [2024-07-14 21:19:37.356020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.332 ms, result 0 00:20:25.933 true 00:20:25.933 21:19:37 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 80361 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80361 ']' 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80361 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80361 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80361' 00:20:25.933 killing process with pid 80361 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80361 00:20:25.933 21:19:37 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80361 00:20:26.872 [2024-07-14 21:19:38.277302] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.277389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.872 [2024-07-14 21:19:38.277426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:26.872 [2024-07-14 21:19:38.277438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.277470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:26.872 [2024-07-14 21:19:38.280725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.280806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.872 [2024-07-14 21:19:38.280858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:20:26.872 [2024-07-14 21:19:38.280874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.281174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.281205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.872 [2024-07-14 21:19:38.281220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:20:26.872 [2024-07-14 21:19:38.281233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.285364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.285410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.872 [2024-07-14 21:19:38.285429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:20:26.872 [2024-07-14 21:19:38.285443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.292446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.292501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.872 [2024-07-14 21:19:38.292532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.958 ms 00:20:26.872 [2024-07-14 21:19:38.292548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.304444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.304505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.872 [2024-07-14 21:19:38.304522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.839 ms 00:20:26.872 [2024-07-14 21:19:38.304537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.312909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.312967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.872 [2024-07-14 21:19:38.313001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.327 ms 00:20:26.872 [2024-07-14 21:19:38.313013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.313157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.313180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.872 [2024-07-14 21:19:38.313223] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:26.872 [2024-07-14 21:19:38.313264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.325572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.325626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:26.872 [2024-07-14 21:19:38.325658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.283 ms 00:20:26.872 [2024-07-14 21:19:38.325670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.337651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.337705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:26.872 [2024-07-14 21:19:38.337736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.940 ms 00:20:26.872 [2024-07-14 21:19:38.337753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.349360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.349416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:26.872 [2024-07-14 21:19:38.349447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.568 ms 00:20:26.872 [2024-07-14 21:19:38.349460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.360915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.872 [2024-07-14 21:19:38.360985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:26.872 [2024-07-14 21:19:38.360999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.386 ms 00:20:26.872 [2024-07-14 21:19:38.361011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.872 [2024-07-14 21:19:38.361052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:26.872 [2024-07-14 21:19:38.361078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 
21:19:38.361222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:26.872 [2024-07-14 21:19:38.361563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.872 [2024-07-14 21:19:38.361968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.361982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.361994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.873 [2024-07-14 21:19:38.362441] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.873 [2024-07-14 21:19:38.362454] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1692d4e-1846-41c9-a805-9c9f076300af 00:20:26.873 [2024-07-14 21:19:38.362472] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:26.873 [2024-07-14 21:19:38.362484] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:26.873 [2024-07-14 21:19:38.362497] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:26.873 [2024-07-14 21:19:38.362509] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:26.873 [2024-07-14 21:19:38.362521] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.873 [2024-07-14 21:19:38.362533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.873 [2024-07-14 21:19:38.362546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.873 [2024-07-14 21:19:38.362556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.873 [2024-07-14 21:19:38.362580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.873 [2024-07-14 21:19:38.362592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:26.873 [2024-07-14 21:19:38.362605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.873 [2024-07-14 21:19:38.362618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:20:26.873 [2024-07-14 21:19:38.362631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.873 [2024-07-14 21:19:38.378311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.873 [2024-07-14 21:19:38.378383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.873 [2024-07-14 21:19:38.378398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.640 ms 00:20:26.873 [2024-07-14 21:19:38.378413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.873 [2024-07-14 21:19:38.378915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.873 [2024-07-14 21:19:38.378952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.873 [2024-07-14 21:19:38.378970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:20:26.873 [2024-07-14 21:19:38.378987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 21:19:38.431113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.431183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.131 [2024-07-14 21:19:38.431232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.431245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 21:19:38.431374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.431395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.131 [2024-07-14 21:19:38.431408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.431424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 21:19:38.431501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.431522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.131 [2024-07-14 21:19:38.431535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.431551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 21:19:38.431576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.431592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.131 [2024-07-14 21:19:38.431604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.431616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 21:19:38.529621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.529703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.131 [2024-07-14 21:19:38.529738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.529752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 
21:19:38.614000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.614097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.131 [2024-07-14 21:19:38.614117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.614132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 21:19:38.614255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.614277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.131 [2024-07-14 21:19:38.614290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.614305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 21:19:38.614355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.614372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.131 [2024-07-14 21:19:38.614384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.614398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.131 [2024-07-14 21:19:38.614522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.131 [2024-07-14 21:19:38.614546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.131 [2024-07-14 21:19:38.614560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.131 [2024-07-14 21:19:38.614573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.132 [2024-07-14 21:19:38.614628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.132 [2024-07-14 21:19:38.614661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.132 [2024-07-14 21:19:38.614675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.132 [2024-07-14 21:19:38.614689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.132 [2024-07-14 21:19:38.614737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.132 [2024-07-14 21:19:38.614758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.132 [2024-07-14 21:19:38.614770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.132 [2024-07-14 21:19:38.614785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.132 [2024-07-14 21:19:38.614861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.132 [2024-07-14 21:19:38.614883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.132 [2024-07-14 21:19:38.614896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.132 [2024-07-14 21:19:38.614909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.132 [2024-07-14 21:19:38.615067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.749 ms, result 0 00:20:28.067 21:19:39 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:28.067 21:19:39 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:28.067 [2024-07-14 21:19:39.604688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:28.067 [2024-07-14 21:19:39.604936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80421 ] 00:20:28.325 [2024-07-14 21:19:39.775303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.584 [2024-07-14 21:19:39.956481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.843 [2024-07-14 21:19:40.242682] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.843 [2024-07-14 21:19:40.242804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.103 [2024-07-14 21:19:40.402173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.402263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.103 [2024-07-14 21:19:40.402299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:29.103 [2024-07-14 21:19:40.402311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.405530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.405588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.103 [2024-07-14 21:19:40.405620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:20:29.103 [2024-07-14 21:19:40.405631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.405766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.103 [2024-07-14 21:19:40.406708] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.103 [2024-07-14 21:19:40.406778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.406793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.103 [2024-07-14 21:19:40.406826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:20:29.103 [2024-07-14 21:19:40.406837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.408120] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:29.103 [2024-07-14 21:19:40.424437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.424481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.103 [2024-07-14 21:19:40.424505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.317 ms 00:20:29.103 [2024-07-14 21:19:40.424517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.424636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.424658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.103 [2024-07-14 21:19:40.424672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.029 ms 00:20:29.103 [2024-07-14 21:19:40.424683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.429008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.429056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.103 [2024-07-14 21:19:40.429072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.266 ms 00:20:29.103 [2024-07-14 21:19:40.429083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.429207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.429228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.103 [2024-07-14 21:19:40.429241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:29.103 [2024-07-14 21:19:40.429252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.429327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.429343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.103 [2024-07-14 21:19:40.429356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:29.103 [2024-07-14 21:19:40.429370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.429406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:29.103 [2024-07-14 21:19:40.433604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.433657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.103 [2024-07-14 21:19:40.433672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.208 ms 00:20:29.103 [2024-07-14 21:19:40.433683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.433749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.433766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.103 [2024-07-14 21:19:40.433779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:29.103 [2024-07-14 21:19:40.433790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.433838] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.103 [2024-07-14 21:19:40.433867] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:29.103 [2024-07-14 21:19:40.433914] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.103 [2024-07-14 21:19:40.433936] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:29.103 [2024-07-14 21:19:40.434042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.103 [2024-07-14 21:19:40.434057] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.103 [2024-07-14 21:19:40.434072] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:29.103 [2024-07-14 21:19:40.434087] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434101] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434114] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:29.103 [2024-07-14 21:19:40.434129] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.103 [2024-07-14 21:19:40.434140] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.103 [2024-07-14 21:19:40.434151] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.103 [2024-07-14 21:19:40.434163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.434174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.103 [2024-07-14 21:19:40.434186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:20:29.103 [2024-07-14 21:19:40.434197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.434294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.103 [2024-07-14 21:19:40.434310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.103 [2024-07-14 21:19:40.434322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:29.103 [2024-07-14 21:19:40.434338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.103 [2024-07-14 21:19:40.434446] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.103 [2024-07-14 21:19:40.434474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.103 [2024-07-14 21:19:40.434489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.103 [2024-07-14 21:19:40.434522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.103 [2024-07-14 21:19:40.434554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.103 [2024-07-14 21:19:40.434574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.103 [2024-07-14 21:19:40.434583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:29.103 [2024-07-14 21:19:40.434594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.103 [2024-07-14 21:19:40.434604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.103 [2024-07-14 21:19:40.434615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:29.103 [2024-07-14 21:19:40.434625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434635] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.103 [2024-07-14 21:19:40.434646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.103 [2024-07-14 21:19:40.434691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.103 [2024-07-14 21:19:40.434721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.103 [2024-07-14 21:19:40.434751] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.103 [2024-07-14 21:19:40.434782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.103 [2024-07-14 21:19:40.434822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.103 [2024-07-14 21:19:40.434834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:29.103 [2024-07-14 21:19:40.434844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.103 [2024-07-14 21:19:40.434854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.103 [2024-07-14 21:19:40.434864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:29.104 [2024-07-14 21:19:40.434874] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.104 [2024-07-14 21:19:40.434884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.104 [2024-07-14 21:19:40.434895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:29.104 [2024-07-14 21:19:40.434906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.104 [2024-07-14 21:19:40.434916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.104 [2024-07-14 21:19:40.434925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:29.104 [2024-07-14 21:19:40.434936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.104 [2024-07-14 21:19:40.434945] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.104 [2024-07-14 21:19:40.434956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.104 [2024-07-14 21:19:40.434968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.104 [2024-07-14 21:19:40.434978] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.104 [2024-07-14 21:19:40.434990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:29.104 
[2024-07-14 21:19:40.435000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.104 [2024-07-14 21:19:40.435010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.104 [2024-07-14 21:19:40.435021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.104 [2024-07-14 21:19:40.435031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.104 [2024-07-14 21:19:40.435041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.104 [2024-07-14 21:19:40.435053] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.104 [2024-07-14 21:19:40.435073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.104 [2024-07-14 21:19:40.435086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:29.104 [2024-07-14 21:19:40.435097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:29.104 [2024-07-14 21:19:40.435109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:29.104 [2024-07-14 21:19:40.435142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:29.104 [2024-07-14 21:19:40.435157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:29.104 [2024-07-14 21:19:40.435169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:29.104 [2024-07-14 21:19:40.435180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:29.104 [2024-07-14 21:19:40.435191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:29.104 [2024-07-14 21:19:40.435202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:29.104 [2024-07-14 21:19:40.435214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:29.104 [2024-07-14 21:19:40.435225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:29.104 [2024-07-14 21:19:40.435236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:29.104 [2024-07-14 21:19:40.435247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:29.104 [2024-07-14 21:19:40.435259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:29.104 [2024-07-14 21:19:40.435270] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.104 [2024-07-14 21:19:40.435283] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.104 [2024-07-14 21:19:40.435295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.104 [2024-07-14 21:19:40.435306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.104 [2024-07-14 21:19:40.435318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.104 [2024-07-14 21:19:40.435330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.104 [2024-07-14 21:19:40.435343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.435354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.104 [2024-07-14 21:19:40.435365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:20:29.104 [2024-07-14 21:19:40.435376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.474723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.474779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.104 [2024-07-14 21:19:40.474812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.249 ms 00:20:29.104 [2024-07-14 21:19:40.474827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.475028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.475048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.104 [2024-07-14 21:19:40.475062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:29.104 [2024-07-14 21:19:40.475080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.513981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.514048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.104 [2024-07-14 21:19:40.514082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.867 ms 00:20:29.104 [2024-07-14 21:19:40.514093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.514218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.514237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.104 [2024-07-14 21:19:40.514250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:29.104 [2024-07-14 21:19:40.514260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.514596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.514624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.104 [2024-07-14 21:19:40.514638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:29.104 [2024-07-14 21:19:40.514649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 
21:19:40.514818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.514845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.104 [2024-07-14 21:19:40.514859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:20:29.104 [2024-07-14 21:19:40.514870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.530826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.530889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.104 [2024-07-14 21:19:40.530907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.925 ms 00:20:29.104 [2024-07-14 21:19:40.530919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.546197] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:29.104 [2024-07-14 21:19:40.546255] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.104 [2024-07-14 21:19:40.546289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.546301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.104 [2024-07-14 21:19:40.546313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.208 ms 00:20:29.104 [2024-07-14 21:19:40.546324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.574559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.574632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.104 [2024-07-14 21:19:40.574666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.143 ms 00:20:29.104 [2024-07-14 21:19:40.574678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.589698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.589752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.104 [2024-07-14 21:19:40.589783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.926 ms 00:20:29.104 [2024-07-14 21:19:40.589794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.604449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.604505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.104 [2024-07-14 21:19:40.604521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.561 ms 00:20:29.104 [2024-07-14 21:19:40.604532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.104 [2024-07-14 21:19:40.605359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.104 [2024-07-14 21:19:40.605410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.104 [2024-07-14 21:19:40.605441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:20:29.104 [2024-07-14 21:19:40.605452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.674433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:29.363 [2024-07-14 21:19:40.674524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.363 [2024-07-14 21:19:40.674559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.946 ms 00:20:29.363 [2024-07-14 21:19:40.674570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.686197] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:29.363 [2024-07-14 21:19:40.699089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.363 [2024-07-14 21:19:40.699170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.363 [2024-07-14 21:19:40.699204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.373 ms 00:20:29.363 [2024-07-14 21:19:40.699215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.699349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.363 [2024-07-14 21:19:40.699368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.363 [2024-07-14 21:19:40.699385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:29.363 [2024-07-14 21:19:40.699396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.699462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.363 [2024-07-14 21:19:40.699493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.363 [2024-07-14 21:19:40.699505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:29.363 [2024-07-14 21:19:40.699532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.699564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.363 [2024-07-14 21:19:40.699578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.363 [2024-07-14 21:19:40.699591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:29.363 [2024-07-14 21:19:40.699607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.699643] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.363 [2024-07-14 21:19:40.699659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.363 [2024-07-14 21:19:40.699671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.363 [2024-07-14 21:19:40.699683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:29.363 [2024-07-14 21:19:40.699694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.728376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.363 [2024-07-14 21:19:40.728456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.363 [2024-07-14 21:19:40.728481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.652 ms 00:20:29.363 [2024-07-14 21:19:40.728493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.728620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.363 [2024-07-14 21:19:40.728641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:29.363 [2024-07-14 21:19:40.728654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:29.363 [2024-07-14 21:19:40.728664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.363 [2024-07-14 21:19:40.729752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.363 [2024-07-14 21:19:40.733784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.192 ms, result 0 00:20:29.363 [2024-07-14 21:19:40.734586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.363 [2024-07-14 21:19:40.750163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.626  Copying: 256/256 [MB] (average 25 MBps)[2024-07-14 21:19:50.946144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.626 [2024-07-14 21:19:50.958501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:50.958575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.626 [2024-07-14 21:19:50.958612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:39.626 [2024-07-14 21:19:50.958624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:50.958655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:39.626 [2024-07-14 21:19:50.961953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:50.962007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.626 [2024-07-14 21:19:50.962037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:20:39.626 [2024-07-14 21:19:50.962048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:50.962329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:50.962354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.626 [2024-07-14 21:19:50.962367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:20:39.626 [2024-07-14 21:19:50.962378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:50.966038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:50.966081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.626 [2024-07-14 21:19:50.966110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:20:39.626 [2024-07-14 21:19:50.966127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:50.973309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:50.973352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.626
[2024-07-14 21:19:50.973381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.159 ms 00:20:39.626 [2024-07-14 21:19:50.973392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:51.003231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:51.003288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.626 [2024-07-14 21:19:51.003320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.774 ms 00:20:39.626 [2024-07-14 21:19:51.003331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:51.020488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:51.020547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.626 [2024-07-14 21:19:51.020565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.093 ms 00:20:39.626 [2024-07-14 21:19:51.020577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:51.020783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:51.020828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.626 [2024-07-14 21:19:51.020843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:39.626 [2024-07-14 21:19:51.020855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:51.051157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:51.051212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:39.626 [2024-07-14 21:19:51.051257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.277 ms 00:20:39.626 [2024-07-14 21:19:51.051268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:51.081165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:51.081234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:39.626 [2024-07-14 21:19:51.081265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.833 ms 00:20:39.626 [2024-07-14 21:19:51.081275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:51.110509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:51.110563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.626 [2024-07-14 21:19:51.110594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.172 ms 00:20:39.626 [2024-07-14 21:19:51.110605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:51.139910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-07-14 21:19:51.139964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.626 [2024-07-14 21:19:51.139995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.211 ms 00:20:39.626 [2024-07-14 21:19:51.140006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-07-14 21:19:51.140071] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.626 [2024-07-14 21:19:51.140096] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 
21:19:51.140422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:39.626 [2024-07-14 21:19:51.140677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:20:39.627 [2024-07-14 21:19:51.140712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.140997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:39.627 [2024-07-14 21:19:51.141321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.627 [2024-07-14 21:19:51.141332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1692d4e-1846-41c9-a805-9c9f076300af 00:20:39.627 [2024-07-14 21:19:51.141344] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.627 [2024-07-14 21:19:51.141355] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:39.627 [2024-07-14 21:19:51.141378] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.627 [2024-07-14 21:19:51.141390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.627 [2024-07-14 21:19:51.141400] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.627 [2024-07-14 21:19:51.141412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.627 [2024-07-14 21:19:51.141423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.627 [2024-07-14 21:19:51.141433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.627 [2024-07-14 21:19:51.141443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.627 [2024-07-14 21:19:51.141454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.627 [2024-07-14 21:19:51.141466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.627 [2024-07-14 21:19:51.141479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:20:39.627 [2024-07-14 21:19:51.141494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.627 [2024-07-14 21:19:51.157655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.627 [2024-07-14 21:19:51.157705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.627 [2024-07-14 21:19:51.157735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.134 ms 00:20:39.627 [2024-07-14 21:19:51.157746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.627 [2024-07-14 21:19:51.158239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.627 [2024-07-14 21:19:51.158271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.627 [2024-07-14 21:19:51.158292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:20:39.627 [2024-07-14 21:19:51.158304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.198425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.198498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.887 [2024-07-14 21:19:51.198531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.198542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.198644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.198661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.887 [2024-07-14 21:19:51.198679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.198689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.198765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.198798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.887 [2024-07-14 21:19:51.198811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.198823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.198861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.198877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.887 [2024-07-14 21:19:51.198894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.198910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.291153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.291245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.887 [2024-07-14 21:19:51.291289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.291300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.371172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.371239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.887 [2024-07-14 21:19:51.371258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.371276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.371361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.371377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.887 [2024-07-14 21:19:51.371390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.371401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.371436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.371449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.887 [2024-07-14 21:19:51.371460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.371471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.371596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.371616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.887 [2024-07-14 21:19:51.371629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.371640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.371689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.371706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:39.887 [2024-07-14 21:19:51.371719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 
[2024-07-14 21:19:51.371730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.371780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.371816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.887 [2024-07-14 21:19:51.371832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.371844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.371903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-07-14 21:19:51.371919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.887 [2024-07-14 21:19:51.371930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-07-14 21:19:51.371941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-07-14 21:19:51.372105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.596 ms, result 0 00:20:41.266 00:20:41.266 00:20:41.266 21:19:52 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:41.266 21:19:52 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:41.525 21:19:52 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:41.525 [2024-07-14 21:19:53.067084] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:41.525 [2024-07-14 21:19:53.067243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80559 ] 00:20:41.801 [2024-07-14 21:19:53.237939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.067 [2024-07-14 21:19:53.417265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.325 [2024-07-14 21:19:53.707687] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.325 [2024-07-14 21:19:53.707773] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.325 [2024-07-14 21:19:53.868486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.325 [2024-07-14 21:19:53.868539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.325 [2024-07-14 21:19:53.868558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.325 [2024-07-14 21:19:53.868570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.584 [2024-07-14 21:19:53.872040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.584 [2024-07-14 21:19:53.872078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.584 [2024-07-14 21:19:53.872094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.441 ms 00:20:42.584 [2024-07-14 21:19:53.872106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.584 [2024-07-14 21:19:53.872236] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:42.584 [2024-07-14 21:19:53.873252] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.584 [2024-07-14 21:19:53.873289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.584 [2024-07-14 21:19:53.873302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.584 [2024-07-14 21:19:53.873315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:20:42.584 [2024-07-14 21:19:53.873326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.584 [2024-07-14 21:19:53.874566] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:42.584 [2024-07-14 21:19:53.890213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.584 [2024-07-14 21:19:53.890264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:42.584 [2024-07-14 21:19:53.890284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.648 ms 00:20:42.584 [2024-07-14 21:19:53.890295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.584 [2024-07-14 21:19:53.890416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.890437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:42.585 [2024-07-14 21:19:53.890449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:42.585 [2024-07-14 21:19:53.890459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.894834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:42.585 [2024-07-14 21:19:53.894890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.585 [2024-07-14 21:19:53.894904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.264 ms 00:20:42.585 [2024-07-14 21:19:53.894915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.895043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.895068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.585 [2024-07-14 21:19:53.895081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:42.585 [2024-07-14 21:19:53.895091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.895134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.895149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.585 [2024-07-14 21:19:53.895160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:42.585 [2024-07-14 21:19:53.895174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.895209] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:42.585 [2024-07-14 21:19:53.899298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.899327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.585 [2024-07-14 21:19:53.899358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.101 ms 00:20:42.585 [2024-07-14 21:19:53.899368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.899430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.899447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.585 [2024-07-14 21:19:53.899459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:42.585 [2024-07-14 21:19:53.899468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.899493] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:42.585 [2024-07-14 21:19:53.899518] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:42.585 [2024-07-14 21:19:53.899560] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:42.585 [2024-07-14 21:19:53.899595] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:42.585 [2024-07-14 21:19:53.899733] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:42.585 [2024-07-14 21:19:53.899748] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.585 [2024-07-14 21:19:53.899762] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:42.585 [2024-07-14 21:19:53.899777] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.585 [2024-07-14 21:19:53.899790] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.585 [2024-07-14 21:19:53.899802] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:42.585 [2024-07-14 21:19:53.899818] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.585 [2024-07-14 21:19:53.899828] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:42.585 [2024-07-14 21:19:53.899839] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:42.585 [2024-07-14 21:19:53.899853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.899881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.585 [2024-07-14 21:19:53.899896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:20:42.585 [2024-07-14 21:19:53.899908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.900011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.900028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.585 [2024-07-14 21:19:53.900042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:42.585 [2024-07-14 21:19:53.900059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.900169] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.585 [2024-07-14 21:19:53.900192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.585 [2024-07-14 21:19:53.900206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:42.585 [2024-07-14 21:19:53.900241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:42.585 [2024-07-14 21:19:53.900273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.585 [2024-07-14 21:19:53.900294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.585 [2024-07-14 21:19:53.900304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:42.585 [2024-07-14 21:19:53.900315] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.585 [2024-07-14 21:19:53.900325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.585 [2024-07-14 21:19:53.900336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:42.585 [2024-07-14 21:19:53.900346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.585 [2024-07-14 21:19:53.900366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900400] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.585 [2024-07-14 21:19:53.900423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.585 [2024-07-14 21:19:53.900454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.585 [2024-07-14 21:19:53.900485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.585 [2024-07-14 21:19:53.900516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.585 [2024-07-14 21:19:53.900546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.585 [2024-07-14 21:19:53.900567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:42.585 [2024-07-14 21:19:53.900577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:42.585 [2024-07-14 21:19:53.900587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.585 [2024-07-14 21:19:53.900600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:42.585 [2024-07-14 21:19:53.900611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:42.585 [2024-07-14 21:19:53.900622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:42.585 [2024-07-14 21:19:53.900642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:42.585 [2024-07-14 21:19:53.900652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900662] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.585 [2024-07-14 21:19:53.900674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.585 [2024-07-14 21:19:53.900685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.585 [2024-07-14 21:19:53.900707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.585 [2024-07-14 21:19:53.900718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.585 [2024-07-14 21:19:53.900728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.585 
[2024-07-14 21:19:53.900754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.585 [2024-07-14 21:19:53.900764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.585 [2024-07-14 21:19:53.900774] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.585 [2024-07-14 21:19:53.900785] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.585 [2024-07-14 21:19:53.900803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.585 [2024-07-14 21:19:53.900858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:42.585 [2024-07-14 21:19:53.900871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:42.585 [2024-07-14 21:19:53.900883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:42.585 [2024-07-14 21:19:53.900893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:42.585 [2024-07-14 21:19:53.900904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:42.585 [2024-07-14 21:19:53.900915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:42.585 [2024-07-14 21:19:53.900926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:42.585 [2024-07-14 21:19:53.900936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:42.585 [2024-07-14 21:19:53.900947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:42.585 [2024-07-14 21:19:53.900958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:42.585 [2024-07-14 21:19:53.900969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:42.585 [2024-07-14 21:19:53.900980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:42.585 [2024-07-14 21:19:53.900992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:42.585 [2024-07-14 21:19:53.901003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:42.585 [2024-07-14 21:19:53.901015] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.585 [2024-07-14 21:19:53.901028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.585 [2024-07-14 21:19:53.901040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.585 [2024-07-14 21:19:53.901051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.585 [2024-07-14 21:19:53.901062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.585 [2024-07-14 21:19:53.901073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:42.585 [2024-07-14 21:19:53.901085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.901096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.585 [2024-07-14 21:19:53.901107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:20:42.585 [2024-07-14 21:19:53.901118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.938989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.939045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.585 [2024-07-14 21:19:53.939066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.775 ms 00:20:42.585 [2024-07-14 21:19:53.939078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.939310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.939331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.585 [2024-07-14 21:19:53.939343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:42.585 [2024-07-14 21:19:53.939360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.978316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.978384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.585 [2024-07-14 21:19:53.978402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.923 ms 00:20:42.585 [2024-07-14 21:19:53.978414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.978556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.978574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.585 [2024-07-14 21:19:53.978587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:42.585 [2024-07-14 21:19:53.978598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.585 [2024-07-14 21:19:53.978965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.585 [2024-07-14 21:19:53.978990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.585 [2024-07-14 21:19:53.979005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:20:42.586 [2024-07-14 21:19:53.979016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.586 [2024-07-14 21:19:53.979174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.586 [2024-07-14 21:19:53.979195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.586 [2024-07-14 21:19:53.979208] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:42.586 [2024-07-14 21:19:53.979219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.586 [2024-07-14 21:19:53.995333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.586 [2024-07-14 21:19:53.995389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.586 [2024-07-14 21:19:53.995405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.085 ms 00:20:42.586 [2024-07-14 21:19:53.995417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.586 [2024-07-14 21:19:54.011577] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:42.586 [2024-07-14 21:19:54.011629] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:42.586 [2024-07-14 21:19:54.011646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.586 [2024-07-14 21:19:54.011657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:42.586 [2024-07-14 21:19:54.011669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.077 ms 00:20:42.586 [2024-07-14 21:19:54.011680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.586 [2024-07-14 21:19:54.040710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.586 [2024-07-14 21:19:54.040776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:42.586 [2024-07-14 21:19:54.040806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.938 ms 00:20:42.586 [2024-07-14 21:19:54.040818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.586 [2024-07-14 21:19:54.056242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.586 [2024-07-14 21:19:54.056293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:42.586 [2024-07-14 21:19:54.056309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.311 ms 00:20:42.586 [2024-07-14 21:19:54.056320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.586 [2024-07-14 21:19:54.071645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.586 [2024-07-14 21:19:54.071696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:42.586 [2024-07-14 21:19:54.071711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.214 ms 00:20:42.586 [2024-07-14 21:19:54.071722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.586 [2024-07-14 21:19:54.072560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.586 [2024-07-14 21:19:54.072592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.586 [2024-07-14 21:19:54.072607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:20:42.586 [2024-07-14 21:19:54.072619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.845 [2024-07-14 21:19:54.144095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.845 [2024-07-14 21:19:54.144180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:42.845 [2024-07-14 21:19:54.144199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.443 ms 00:20:42.845 [2024-07-14 21:19:54.144210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.845 [2024-07-14 21:19:54.156475] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:42.845 [2024-07-14 21:19:54.169755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.845 [2024-07-14 21:19:54.169837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.845 [2024-07-14 21:19:54.169856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.370 ms 00:20:42.845 [2024-07-14 21:19:54.169867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.845 [2024-07-14 21:19:54.169995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.845 [2024-07-14 21:19:54.170013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:42.845 [2024-07-14 21:19:54.170029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:42.845 [2024-07-14 21:19:54.170040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.845 [2024-07-14 21:19:54.170103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.845 [2024-07-14 21:19:54.170118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.845 [2024-07-14 21:19:54.170146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:42.845 [2024-07-14 21:19:54.170171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.845 [2024-07-14 21:19:54.170220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.845 [2024-07-14 21:19:54.170234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:42.845 [2024-07-14 21:19:54.170247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:42.845 [2024-07-14 21:19:54.170263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.845 [2024-07-14 21:19:54.170300] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:42.845 [2024-07-14 21:19:54.170315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.845 [2024-07-14 21:19:54.170327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:42.845 [2024-07-14 21:19:54.170338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:42.845 [2024-07-14 21:19:54.170349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.845 [2024-07-14 21:19:54.200449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.845 [2024-07-14 21:19:54.200504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.845 [2024-07-14 21:19:54.200528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.069 ms 00:20:42.845 [2024-07-14 21:19:54.200540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.845 [2024-07-14 21:19:54.200666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.845 [2024-07-14 21:19:54.200687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.845 [2024-07-14 21:19:54.200701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:42.845 [2024-07-14 21:19:54.200712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:42.845 [2024-07-14 21:19:54.201735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.845 [2024-07-14 21:19:54.205797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.884 ms, result 0 00:20:42.845 [2024-07-14 21:19:54.206764] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.845 [2024-07-14 21:19:54.222605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:43.106  Copying: 4096/4096 [kB] (average 24 MBps)[2024-07-14 21:19:54.392046] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:43.106 [2024-07-14 21:19:54.404369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.106 [2024-07-14 21:19:54.404418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:43.106 [2024-07-14 21:19:54.404437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:43.106 [2024-07-14 21:19:54.404449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.106 [2024-07-14 21:19:54.404480] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:43.106 [2024-07-14 21:19:54.407771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.106 [2024-07-14 21:19:54.407832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:43.106 [2024-07-14 21:19:54.407848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:20:43.106 [2024-07-14 21:19:54.407859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.106 [2024-07-14 21:19:54.409670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.106 [2024-07-14 21:19:54.409738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:43.106 [2024-07-14 21:19:54.409753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:20:43.106 [2024-07-14 21:19:54.409763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.106 [2024-07-14 21:19:54.414053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.106 [2024-07-14 21:19:54.414087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:43.106 [2024-07-14 21:19:54.414102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms 00:20:43.106 [2024-07-14 21:19:54.414120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.106 [2024-07-14 21:19:54.421849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.106 [2024-07-14 21:19:54.421903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:43.106 [2024-07-14 21:19:54.421918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.687 ms 00:20:43.106 [2024-07-14 21:19:54.421929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.106 [2024-07-14 21:19:54.452801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.106 [2024-07-14 21:19:54.452877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:43.106 [2024-07-14 21:19:54.452894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
30.791 ms 00:20:43.106 [2024-07-14 21:19:54.452905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.106 [2024-07-14 21:19:54.470187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.107 [2024-07-14 21:19:54.470254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:43.107 [2024-07-14 21:19:54.470270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.209 ms 00:20:43.107 [2024-07-14 21:19:54.470282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.107 [2024-07-14 21:19:54.470454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.107 [2024-07-14 21:19:54.470475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:43.107 [2024-07-14 21:19:54.470487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:43.107 [2024-07-14 21:19:54.470498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.107 [2024-07-14 21:19:54.501247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.107 [2024-07-14 21:19:54.501298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:43.107 [2024-07-14 21:19:54.501314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.728 ms 00:20:43.107 [2024-07-14 21:19:54.501324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.107 [2024-07-14 21:19:54.531908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.107 [2024-07-14 21:19:54.531959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:43.107 [2024-07-14 21:19:54.531975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.521 ms 00:20:43.107 [2024-07-14 21:19:54.531985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.107 [2024-07-14 21:19:54.562047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.107 [2024-07-14 21:19:54.562099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:43.107 [2024-07-14 21:19:54.562115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.998 ms 00:20:43.107 [2024-07-14 21:19:54.562126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.107 [2024-07-14 21:19:54.592141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.107 [2024-07-14 21:19:54.592209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:43.107 [2024-07-14 21:19:54.592240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.905 ms 00:20:43.107 [2024-07-14 21:19:54.592257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.107 [2024-07-14 21:19:54.592333] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:43.107 [2024-07-14 21:19:54.592356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 
21:19:54.592438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:20:43.107 [2024-07-14 21:19:54.592747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.592997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.593008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.593019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.593030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.593042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.593053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:43.107 [2024-07-14 21:19:54.593064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:43.108 [2024-07-14 21:19:54.593610] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:43.108 [2024-07-14 21:19:54.593621] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1692d4e-1846-41c9-a805-9c9f076300af 00:20:43.108 [2024-07-14 21:19:54.593632] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:43.108 [2024-07-14 21:19:54.593642] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:43.108 
[2024-07-14 21:19:54.593665] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:43.108 [2024-07-14 21:19:54.593677] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:43.108 [2024-07-14 21:19:54.593687] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:43.108 [2024-07-14 21:19:54.593697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:43.108 [2024-07-14 21:19:54.593708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:43.108 [2024-07-14 21:19:54.593718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:43.108 [2024-07-14 21:19:54.593727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:43.108 [2024-07-14 21:19:54.593737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.108 [2024-07-14 21:19:54.593749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:43.108 [2024-07-14 21:19:54.593761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:20:43.108 [2024-07-14 21:19:54.593776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.108 [2024-07-14 21:19:54.609982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.108 [2024-07-14 21:19:54.610031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:43.108 [2024-07-14 21:19:54.610047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.181 ms 00:20:43.108 [2024-07-14 21:19:54.610059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.108 [2024-07-14 21:19:54.610497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.108 [2024-07-14 21:19:54.610518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:43.108 [2024-07-14 21:19:54.610537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:20:43.108 [2024-07-14 21:19:54.610548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.108 [2024-07-14 21:19:54.649107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.108 [2024-07-14 21:19:54.649169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.108 [2024-07-14 21:19:54.649184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.108 [2024-07-14 21:19:54.649194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.108 [2024-07-14 21:19:54.649290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.108 [2024-07-14 21:19:54.649305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.108 [2024-07-14 21:19:54.649322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.108 [2024-07-14 21:19:54.649332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.108 [2024-07-14 21:19:54.649438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.108 [2024-07-14 21:19:54.649456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.108 [2024-07-14 21:19:54.649469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.108 [2024-07-14 21:19:54.649480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.108 [2024-07-14 21:19:54.649504] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:43.109 [2024-07-14 21:19:54.649517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.109 [2024-07-14 21:19:54.649528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.109 [2024-07-14 21:19:54.649544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.368 [2024-07-14 21:19:54.741400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.368 [2024-07-14 21:19:54.741476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.368 [2024-07-14 21:19:54.741493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.368 [2024-07-14 21:19:54.741503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.368 [2024-07-14 21:19:54.824330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.368 [2024-07-14 21:19:54.824435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.368 [2024-07-14 21:19:54.824460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.368 [2024-07-14 21:19:54.824472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.368 [2024-07-14 21:19:54.824564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.368 [2024-07-14 21:19:54.824579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.368 [2024-07-14 21:19:54.824591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.368 [2024-07-14 21:19:54.824601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.368 [2024-07-14 21:19:54.824651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.368 [2024-07-14 21:19:54.824663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.368 [2024-07-14 21:19:54.824675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.368 [2024-07-14 21:19:54.824687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.368 [2024-07-14 21:19:54.824810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.368 [2024-07-14 21:19:54.824850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.368 [2024-07-14 21:19:54.824866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.368 [2024-07-14 21:19:54.824878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.368 [2024-07-14 21:19:54.824929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.368 [2024-07-14 21:19:54.824951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:43.368 [2024-07-14 21:19:54.824965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.368 [2024-07-14 21:19:54.824976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.368 [2024-07-14 21:19:54.825028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.368 [2024-07-14 21:19:54.825044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.368 [2024-07-14 21:19:54.825055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.368 [2024-07-14 21:19:54.825067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:43.368 [2024-07-14 21:19:54.825120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.368 [2024-07-14 21:19:54.825137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.368 [2024-07-14 21:19:54.825149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.368 [2024-07-14 21:19:54.825160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.368 [2024-07-14 21:19:54.825326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.954 ms, result 0 00:20:44.304 00:20:44.304 00:20:44.304 21:19:55 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80594 00:20:44.304 21:19:55 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:44.304 21:19:55 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80594 00:20:44.304 21:19:55 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80594 ']' 00:20:44.304 21:19:55 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.304 21:19:55 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.304 21:19:55 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.304 21:19:55 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.304 21:19:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:44.562 [2024-07-14 21:19:55.921229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:44.562 [2024-07-14 21:19:55.921379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80594 ] 00:20:44.562 [2024-07-14 21:19:56.076542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.820 [2024-07-14 21:19:56.226823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.388 21:19:56 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.388 21:19:56 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:20:45.388 21:19:56 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:45.647 [2024-07-14 21:19:57.101489] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.647 [2024-07-14 21:19:57.101564] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.907 [2024-07-14 21:19:57.278222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.278301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:45.907 [2024-07-14 21:19:57.278320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:45.907 [2024-07-14 21:19:57.278333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.281547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.281602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.907 [2024-07-14 21:19:57.281617] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:20:45.907 [2024-07-14 21:19:57.281630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.281788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:45.907 [2024-07-14 21:19:57.282783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:45.907 [2024-07-14 21:19:57.282860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.282877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.907 [2024-07-14 21:19:57.282900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:20:45.907 [2024-07-14 21:19:57.282914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.284201] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:45.907 [2024-07-14 21:19:57.301029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.301100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:45.907 [2024-07-14 21:19:57.301123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.837 ms 00:20:45.907 [2024-07-14 21:19:57.301136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.301307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.301326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:45.907 [2024-07-14 21:19:57.301341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:45.907 [2024-07-14 21:19:57.301351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.305980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.306035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.907 [2024-07-14 21:19:57.306058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:20:45.907 [2024-07-14 21:19:57.306070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.306264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.306284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.907 [2024-07-14 21:19:57.306313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:45.907 [2024-07-14 21:19:57.306339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.306383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.306398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:45.907 [2024-07-14 21:19:57.306411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:45.907 [2024-07-14 21:19:57.306422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.306458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:45.907 [2024-07-14 21:19:57.310597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:45.907 [2024-07-14 21:19:57.310644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.907 [2024-07-14 21:19:57.310657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.151 ms 00:20:45.907 [2024-07-14 21:19:57.310669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.310727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.907 [2024-07-14 21:19:57.310750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:45.907 [2024-07-14 21:19:57.310762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:45.907 [2024-07-14 21:19:57.310776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.907 [2024-07-14 21:19:57.310802] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:45.907 [2024-07-14 21:19:57.310897] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:45.907 [2024-07-14 21:19:57.310946] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:45.907 [2024-07-14 21:19:57.310974] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:45.907 [2024-07-14 21:19:57.311073] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:45.907 [2024-07-14 21:19:57.311092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:45.908 [2024-07-14 21:19:57.311110] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:45.908 [2024-07-14 21:19:57.311126] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311140] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311153] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:45.908 [2024-07-14 21:19:57.311165] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:45.908 [2024-07-14 21:19:57.311177] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:45.908 [2024-07-14 21:19:57.311189] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:45.908 [2024-07-14 21:19:57.311234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.908 [2024-07-14 21:19:57.311259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:45.908 [2024-07-14 21:19:57.311271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:20:45.908 [2024-07-14 21:19:57.311281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.908 [2024-07-14 21:19:57.311386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.908 [2024-07-14 21:19:57.311401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:45.908 [2024-07-14 21:19:57.311413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:45.908 [2024-07-14 21:19:57.311423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.908 [2024-07-14 21:19:57.311524] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:45.908 [2024-07-14 21:19:57.311541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:45.908 [2024-07-14 21:19:57.311553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:45.908 [2024-07-14 21:19:57.311584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:45.908 [2024-07-14 21:19:57.311620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.908 [2024-07-14 21:19:57.311640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:45.908 [2024-07-14 21:19:57.311649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:45.908 [2024-07-14 21:19:57.311660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.908 [2024-07-14 21:19:57.311669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:45.908 [2024-07-14 21:19:57.311680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:45.908 [2024-07-14 21:19:57.311689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:45.908 [2024-07-14 21:19:57.311708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:45.908 [2024-07-14 21:19:57.311742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:45.908 [2024-07-14 21:19:57.311770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:45.908 [2024-07-14 21:19:57.311821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311874] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:45.908 [2024-07-14 21:19:57.311898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.908 [2024-07-14 21:19:57.311921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:45.908 [2024-07-14 
21:19:57.311932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:45.908 [2024-07-14 21:19:57.311942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.908 [2024-07-14 21:19:57.311953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:45.908 [2024-07-14 21:19:57.311964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:45.908 [2024-07-14 21:19:57.311975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.908 [2024-07-14 21:19:57.311984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:45.908 [2024-07-14 21:19:57.311996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:45.908 [2024-07-14 21:19:57.312006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.908 [2024-07-14 21:19:57.312019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:45.908 [2024-07-14 21:19:57.312029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:45.908 [2024-07-14 21:19:57.312041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.908 [2024-07-14 21:19:57.312051] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:45.908 [2024-07-14 21:19:57.312066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:45.908 [2024-07-14 21:19:57.312077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.908 [2024-07-14 21:19:57.312089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.908 [2024-07-14 21:19:57.312101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:45.908 [2024-07-14 21:19:57.312112] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:45.908 [2024-07-14 21:19:57.312122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:45.908 [2024-07-14 21:19:57.312135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:45.908 [2024-07-14 21:19:57.312145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:45.908 [2024-07-14 21:19:57.312157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:45.908 [2024-07-14 21:19:57.312168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:45.908 [2024-07-14 21:19:57.312183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.908 [2024-07-14 21:19:57.312225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:45.908 [2024-07-14 21:19:57.312254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:45.908 [2024-07-14 21:19:57.312264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:45.908 [2024-07-14 21:19:57.312275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:45.908 [2024-07-14 21:19:57.312286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:45.908 
[2024-07-14 21:19:57.312297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:45.908 [2024-07-14 21:19:57.312307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:45.908 [2024-07-14 21:19:57.312318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:45.908 [2024-07-14 21:19:57.312328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:45.908 [2024-07-14 21:19:57.312339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:45.908 [2024-07-14 21:19:57.312349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:45.908 [2024-07-14 21:19:57.312360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:45.908 [2024-07-14 21:19:57.312370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:45.908 [2024-07-14 21:19:57.312383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:45.908 [2024-07-14 21:19:57.312436] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:45.908 [2024-07-14 21:19:57.312451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.908 [2024-07-14 21:19:57.312465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:45.908 [2024-07-14 21:19:57.312481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:45.908 [2024-07-14 21:19:57.312492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:45.908 [2024-07-14 21:19:57.312506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:45.908 [2024-07-14 21:19:57.312519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.908 [2024-07-14 21:19:57.312533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:45.908 [2024-07-14 21:19:57.312546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:20:45.908 [2024-07-14 21:19:57.312565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.908 [2024-07-14 21:19:57.341820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.908 [2024-07-14 21:19:57.341900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.908 [2024-07-14 21:19:57.341918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.173 ms 00:20:45.908 [2024-07-14 21:19:57.341934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.908 [2024-07-14 21:19:57.342112] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.908 [2024-07-14 21:19:57.342134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:45.908 [2024-07-14 21:19:57.342146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:45.908 [2024-07-14 21:19:57.342166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.908 [2024-07-14 21:19:57.375721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.908 [2024-07-14 21:19:57.375788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.908 [2024-07-14 21:19:57.375805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.525 ms 00:20:45.908 [2024-07-14 21:19:57.375835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.908 [2024-07-14 21:19:57.375954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.909 [2024-07-14 21:19:57.375975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.909 [2024-07-14 21:19:57.376003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:45.909 [2024-07-14 21:19:57.376015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.909 [2024-07-14 21:19:57.376362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.909 [2024-07-14 21:19:57.376415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.909 [2024-07-14 21:19:57.376436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:45.909 [2024-07-14 21:19:57.376450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.909 [2024-07-14 21:19:57.376602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.909 [2024-07-14 21:19:57.376625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.909 [2024-07-14 21:19:57.376638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:45.909 [2024-07-14 21:19:57.376652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.909 [2024-07-14 21:19:57.391195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.909 [2024-07-14 21:19:57.391247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.909 [2024-07-14 21:19:57.391261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.514 ms 00:20:45.909 [2024-07-14 21:19:57.391273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.909 [2024-07-14 21:19:57.404766] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:45.909 [2024-07-14 21:19:57.404810] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:45.909 [2024-07-14 21:19:57.404858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.909 [2024-07-14 21:19:57.404872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:45.909 [2024-07-14 21:19:57.404884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.464 ms 00:20:45.909 [2024-07-14 21:19:57.404896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.909 [2024-07-14 21:19:57.428588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.909 [2024-07-14 
21:19:57.428643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:45.909 [2024-07-14 21:19:57.428658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.594 ms 00:20:45.909 [2024-07-14 21:19:57.428670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.909 [2024-07-14 21:19:57.441690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.909 [2024-07-14 21:19:57.441741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:45.909 [2024-07-14 21:19:57.441764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.926 ms 00:20:45.909 [2024-07-14 21:19:57.441779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.456815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.456889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:46.168 [2024-07-14 21:19:57.456905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.934 ms 00:20:46.168 [2024-07-14 21:19:57.456919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.457770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.457841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:46.168 [2024-07-14 21:19:57.457857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:20:46.168 [2024-07-14 21:19:57.457870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.528658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.528744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:46.168 [2024-07-14 21:19:57.528765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.758 ms 00:20:46.168 [2024-07-14 21:19:57.528778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.538937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:46.168 [2024-07-14 21:19:57.551319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.551391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:46.168 [2024-07-14 21:19:57.551413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.354 ms 00:20:46.168 [2024-07-14 21:19:57.551426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.551552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.551570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:46.168 [2024-07-14 21:19:57.551584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:46.168 [2024-07-14 21:19:57.551594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.551659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.551673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:46.168 [2024-07-14 21:19:57.551701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:46.168 
[2024-07-14 21:19:57.551711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.551761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.551774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:46.168 [2024-07-14 21:19:57.551790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:46.168 [2024-07-14 21:19:57.551801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.551839] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:46.168 [2024-07-14 21:19:57.551893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.551909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:46.168 [2024-07-14 21:19:57.551921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:46.168 [2024-07-14 21:19:57.551933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.580839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.580949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:46.168 [2024-07-14 21:19:57.580970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.875 ms 00:20:46.168 [2024-07-14 21:19:57.580986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.581113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.168 [2024-07-14 21:19:57.581138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:46.168 [2024-07-14 21:19:57.581151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:46.168 [2024-07-14 21:19:57.581165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.168 [2024-07-14 21:19:57.582298] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:46.168 [2024-07-14 21:19:57.586329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.667 ms, result 0 00:20:46.168 [2024-07-14 21:19:57.587446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.168 Some configs were skipped because the RPC state that can call them passed over. 
00:20:46.168 21:19:57 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:46.428 [2024-07-14 21:19:57.880206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.428 [2024-07-14 21:19:57.880273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:46.428 [2024-07-14 21:19:57.880300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:20:46.428 [2024-07-14 21:19:57.880313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.428 [2024-07-14 21:19:57.880432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.597 ms, result 0 00:20:46.428 true 00:20:46.428 21:19:57 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:46.688 [2024-07-14 21:19:58.132213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.688 [2024-07-14 21:19:58.132277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:46.688 [2024-07-14 21:19:58.132296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:20:46.688 [2024-07-14 21:19:58.132310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.688 [2024-07-14 21:19:58.132359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.292 ms, result 0 00:20:46.688 true 00:20:46.688 21:19:58 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80594 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80594 ']' 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80594 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80594 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:46.688 killing process with pid 80594 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80594' 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80594 00:20:46.688 21:19:58 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80594 00:20:47.626 [2024-07-14 21:19:59.067447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.067531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:47.626 [2024-07-14 21:19:59.067553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:47.626 [2024-07-14 21:19:59.067565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.067613] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:47.626 [2024-07-14 21:19:59.071312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.071378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:47.626 [2024-07-14 21:19:59.071393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.660 ms 00:20:47.626 [2024-07-14 21:19:59.071408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.071713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.071734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:47.626 [2024-07-14 21:19:59.071748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:47.626 [2024-07-14 21:19:59.071761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.076090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.076132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:47.626 [2024-07-14 21:19:59.076165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:20:47.626 [2024-07-14 21:19:59.076179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.083705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.083754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:47.626 [2024-07-14 21:19:59.083769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.482 ms 00:20:47.626 [2024-07-14 21:19:59.083784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.096860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.096922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:47.626 [2024-07-14 21:19:59.096938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.994 ms 00:20:47.626 [2024-07-14 21:19:59.096953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.105623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.105677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:47.626 [2024-07-14 21:19:59.105695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.608 ms 00:20:47.626 [2024-07-14 21:19:59.105707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.105905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.105932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:47.626 [2024-07-14 21:19:59.105946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:20:47.626 [2024-07-14 21:19:59.105974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.119057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.119096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:47.626 [2024-07-14 21:19:59.119111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.057 ms 00:20:47.626 [2024-07-14 21:19:59.119124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.132070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.132109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:47.626 [2024-07-14 
21:19:59.132125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.887 ms 00:20:47.626 [2024-07-14 21:19:59.132143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.626 [2024-07-14 21:19:59.144364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.144439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:47.626 [2024-07-14 21:19:59.144456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.160 ms 00:20:47.626 [2024-07-14 21:19:59.144469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:47.626 [2024-07-14 21:19:59.156892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.626 [2024-07-14 21:19:59.156928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:47.626 [2024-07-14 21:19:59.156943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.324 ms 00:20:47.626 [2024-07-14 21:19:59.156956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:47.626 [2024-07-14 21:19:59.157013] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:20:47.626 [2024-07-14 21:19:59.157040 .. 21:19:59.158381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report this identical line; the repeats are collapsed here) 
00:20:47.627 [2024-07-14 21:19:59.158404] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:47.627 [2024-07-14 21:19:59.158416] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1692d4e-1846-41c9-a805-9c9f076300af 00:20:47.627 [2024-07-14 21:19:59.158436] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:47.627 [2024-07-14 21:19:59.158447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:47.627 [2024-07-14 21:19:59.158460] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:47.627 [2024-07-14 21:19:59.158472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
00:20:47.627 [2024-07-14 21:19:59.158485] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:47.627 [2024-07-14 21:19:59.158497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:47.627 [2024-07-14 21:19:59.158510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:47.627 [2024-07-14 21:19:59.158536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:47.627 [2024-07-14 21:19:59.158589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
00:20:47.627 [2024-07-14 21:19:59.158600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.628 [2024-07-14 21:19:59.158613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:47.628 [2024-07-14 21:19:59.158625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:20:47.628 [2024-07-14 21:19:59.158638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:47.887 [2024-07-14 21:19:59.176018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.887 [2024-07-14 21:19:59.176059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:47.887 [2024-07-14 21:19:59.176076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.342 ms 00:20:47.887 [2024-07-14 21:19:59.176093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.176611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:20:47.887 [2024-07-14 21:19:59.176640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:47.887 [2024-07-14 21:19:59.176659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:20:47.887 [2024-07-14 21:19:59.176677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.230743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.230831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.887 [2024-07-14 21:19:59.230849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.230863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.231011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.231032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.887 [2024-07-14 21:19:59.231044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.231060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.231121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.231142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.887 [2024-07-14 21:19:59.231153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.231168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.231192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.231208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.887 [2024-07-14 21:19:59.231220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.231232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.319541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.319605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.887 [2024-07-14 21:19:59.319623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.319637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.405065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.405135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.887 [2024-07-14 21:19:59.405154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.405167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.405267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.405289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.887 [2024-07-14 21:19:59.405301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.405317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:47.887 [2024-07-14 21:19:59.405351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.405368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.887 [2024-07-14 21:19:59.405380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.405392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.405507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.405528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.887 [2024-07-14 21:19:59.405540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.405553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.405602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.405623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:47.887 [2024-07-14 21:19:59.405635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.405649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.405695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.405717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.887 [2024-07-14 21:19:59.405728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.887 [2024-07-14 21:19:59.405743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.887 [2024-07-14 21:19:59.405795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.887 [2024-07-14 21:19:59.405848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.887 [2024-07-14 21:19:59.405863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.888 [2024-07-14 21:19:59.405877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.888 [2024-07-14 21:19:59.406031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.571 ms, result 0 00:20:48.823 21:20:00 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:48.823 [2024-07-14 21:20:00.345966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
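(Annotation) The records above complete the hole-punch half of the trim test: trim.sh@99 and trim.sh@100 issued bdev_ftl_unmap for two 1024-block ranges, one at LBA 0 and one at LBA 23591936 (the device reports 23592960 L2P entries, so this is the final 1024 blocks); killprocess then stopped the target, which ran the 'FTL shutdown' pipeline (338.571 ms, result 0); and trim.sh@105 now re-opens the device with spdk_dd. A minimal sketch of that flow, with illustrative variable names (the real helpers live in test/ftl/trim.sh and test/common/autotest_common.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Punch a 1024-block hole at the start and at the very end of the LBA space.
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # killprocess (autotest_common.sh) kills the pid and waits on it, as logged above.
  killprocess "$spdk_tgt_pid"
  # spdk_dd rebuilds the bdev stack from ftl.json, then copies 65536 blocks
  # (4 KiB each, i.e. the 256 MB reported by the Copying progress further down).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The dump presumably lets the harness verify that the two unmapped ranges read back as zeroes after a full shutdown/startup cycle while the surrounding data survives intact.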
00:20:48.823 [2024-07-14 21:20:00.346162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80648 ] 00:20:49.082 [2024-07-14 21:20:00.519575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.340 [2024-07-14 21:20:00.707282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.597 [2024-07-14 21:20:01.011010] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.597 [2024-07-14 21:20:01.011113] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.857 [2024-07-14 21:20:01.169533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.169600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:49.857 [2024-07-14 21:20:01.169633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:49.857 [2024-07-14 21:20:01.169643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.172453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.172493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.857 [2024-07-14 21:20:01.172525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:20:49.857 [2024-07-14 21:20:01.172535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.172663] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:49.857 [2024-07-14 21:20:01.173721] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:49.857 [2024-07-14 21:20:01.173773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.173816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.857 [2024-07-14 21:20:01.173859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:20:49.857 [2024-07-14 21:20:01.173873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.175177] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:49.857 [2024-07-14 21:20:01.188924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.188976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:49.857 [2024-07-14 21:20:01.189011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.748 ms 00:20:49.857 [2024-07-14 21:20:01.189021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.189124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.189144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:49.857 [2024-07-14 21:20:01.189156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:49.857 [2024-07-14 21:20:01.189165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.193112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:49.857 [2024-07-14 21:20:01.193171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.857 [2024-07-14 21:20:01.193200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.861 ms 00:20:49.857 [2024-07-14 21:20:01.193224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.193343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.193363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.857 [2024-07-14 21:20:01.193374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:49.857 [2024-07-14 21:20:01.193383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.193466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.193482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:49.857 [2024-07-14 21:20:01.193495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:49.857 [2024-07-14 21:20:01.193509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.193543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:49.857 [2024-07-14 21:20:01.197329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.197375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.857 [2024-07-14 21:20:01.197403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.796 ms 00:20:49.857 [2024-07-14 21:20:01.197413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.197470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.197487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:49.857 [2024-07-14 21:20:01.197498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:49.857 [2024-07-14 21:20:01.197507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.197529] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:49.857 [2024-07-14 21:20:01.197554] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:49.857 [2024-07-14 21:20:01.197607] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:49.857 [2024-07-14 21:20:01.197657] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:49.857 [2024-07-14 21:20:01.197750] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:49.857 [2024-07-14 21:20:01.197765] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:49.857 [2024-07-14 21:20:01.197778] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:49.857 [2024-07-14 21:20:01.197792] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:49.857 [2024-07-14 21:20:01.197804] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:49.857 [2024-07-14 21:20:01.197815] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:49.857 [2024-07-14 21:20:01.197828] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:49.857 [2024-07-14 21:20:01.197838] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:49.857 [2024-07-14 21:20:01.197848] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:49.857 [2024-07-14 21:20:01.197875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.197888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:49.857 [2024-07-14 21:20:01.197898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:20:49.857 [2024-07-14 21:20:01.197909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.197999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.857 [2024-07-14 21:20:01.198017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:49.857 [2024-07-14 21:20:01.198029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:49.857 [2024-07-14 21:20:01.198043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.857 [2024-07-14 21:20:01.198143] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:49.857 [2024-07-14 21:20:01.198161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:49.857 [2024-07-14 21:20:01.198173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.857 [2024-07-14 21:20:01.198183] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.857 [2024-07-14 21:20:01.198193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:49.857 [2024-07-14 21:20:01.198202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:49.857 [2024-07-14 21:20:01.198212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:49.857 [2024-07-14 21:20:01.198221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:49.857 [2024-07-14 21:20:01.198231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:49.857 [2024-07-14 21:20:01.198240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.857 [2024-07-14 21:20:01.198249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:49.858 [2024-07-14 21:20:01.198258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:49.858 [2024-07-14 21:20:01.198267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.858 [2024-07-14 21:20:01.198277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:49.858 [2024-07-14 21:20:01.198287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:49.858 [2024-07-14 21:20:01.198296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:49.858 [2024-07-14 21:20:01.198314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:49.858 [2024-07-14 21:20:01.198336] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:49.858 [2024-07-14 21:20:01.198355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.858 [2024-07-14 21:20:01.198374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:49.858 [2024-07-14 21:20:01.198384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.858 [2024-07-14 21:20:01.198402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:49.858 [2024-07-14 21:20:01.198411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.858 [2024-07-14 21:20:01.198429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:49.858 [2024-07-14 21:20:01.198439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.858 [2024-07-14 21:20:01.198457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:49.858 [2024-07-14 21:20:01.198466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.858 [2024-07-14 21:20:01.198484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:49.858 [2024-07-14 21:20:01.198494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:49.858 [2024-07-14 21:20:01.198502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.858 [2024-07-14 21:20:01.198512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:49.858 [2024-07-14 21:20:01.198521] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:49.858 [2024-07-14 21:20:01.198530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:49.858 [2024-07-14 21:20:01.198549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:49.858 [2024-07-14 21:20:01.198558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198566] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:49.858 [2024-07-14 21:20:01.198576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:49.858 [2024-07-14 21:20:01.198587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.858 [2024-07-14 21:20:01.198597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.858 [2024-07-14 21:20:01.198608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:49.858 [2024-07-14 21:20:01.198617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:49.858 [2024-07-14 21:20:01.198626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:49.858 
[2024-07-14 21:20:01.198636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:49.858 [2024-07-14 21:20:01.198645] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:49.858 [2024-07-14 21:20:01.198654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:49.858 [2024-07-14 21:20:01.198665] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:49.858 [2024-07-14 21:20:01.198681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.858 [2024-07-14 21:20:01.198693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:49.858 [2024-07-14 21:20:01.198703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:49.858 [2024-07-14 21:20:01.198713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:49.858 [2024-07-14 21:20:01.198723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:49.858 [2024-07-14 21:20:01.198733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:49.858 [2024-07-14 21:20:01.198743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:49.858 [2024-07-14 21:20:01.198753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:49.858 [2024-07-14 21:20:01.198763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:49.858 [2024-07-14 21:20:01.198773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:49.858 [2024-07-14 21:20:01.198782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:49.858 [2024-07-14 21:20:01.198792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:49.858 [2024-07-14 21:20:01.198840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:49.858 [2024-07-14 21:20:01.198852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:49.858 [2024-07-14 21:20:01.198863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:49.858 [2024-07-14 21:20:01.198873] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:49.858 [2024-07-14 21:20:01.198885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.858 [2024-07-14 21:20:01.198896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:49.858 [2024-07-14 21:20:01.198907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:49.858 [2024-07-14 21:20:01.198918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:49.858 [2024-07-14 21:20:01.198928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:49.858 [2024-07-14 21:20:01.198940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.198951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:49.858 [2024-07-14 21:20:01.198964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:20:49.858 [2024-07-14 21:20:01.198974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.236474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.236556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.858 [2024-07-14 21:20:01.236575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.429 ms 00:20:49.858 [2024-07-14 21:20:01.236591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.236798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.236873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:49.858 [2024-07-14 21:20:01.236922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:49.858 [2024-07-14 21:20:01.236933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.270950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.271025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.858 [2024-07-14 21:20:01.271043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.984 ms 00:20:49.858 [2024-07-14 21:20:01.271053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.271197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.271230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.858 [2024-07-14 21:20:01.271242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:49.858 [2024-07-14 21:20:01.271263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.271620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.271648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.858 [2024-07-14 21:20:01.271662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:20:49.858 [2024-07-14 21:20:01.271672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.271867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.271900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.858 [2024-07-14 21:20:01.271913] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:20:49.858 [2024-07-14 21:20:01.271924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.287926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.287987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.858 [2024-07-14 21:20:01.288003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.970 ms 00:20:49.858 [2024-07-14 21:20:01.288014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.303190] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:49.858 [2024-07-14 21:20:01.303262] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:49.858 [2024-07-14 21:20:01.303294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.303305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:49.858 [2024-07-14 21:20:01.303317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.127 ms 00:20:49.858 [2024-07-14 21:20:01.303327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.329603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-07-14 21:20:01.329709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:49.858 [2024-07-14 21:20:01.329743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.182 ms 00:20:49.858 [2024-07-14 21:20:01.329754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-07-14 21:20:01.344030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-07-14 21:20:01.344081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:49.859 [2024-07-14 21:20:01.344111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.084 ms 00:20:49.859 [2024-07-14 21:20:01.344122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-07-14 21:20:01.357611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-07-14 21:20:01.357679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:49.859 [2024-07-14 21:20:01.357709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.411 ms 00:20:49.859 [2024-07-14 21:20:01.357719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-07-14 21:20:01.358565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-07-14 21:20:01.358610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:49.859 [2024-07-14 21:20:01.358623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:20:49.859 [2024-07-14 21:20:01.358632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.116 [2024-07-14 21:20:01.429611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.116 [2024-07-14 21:20:01.429696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.116 [2024-07-14 21:20:01.429731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 70.948 ms 00:20:50.116 [2024-07-14 21:20:01.429742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.116 [2024-07-14 21:20:01.441528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:50.116 [2024-07-14 21:20:01.454758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.116 [2024-07-14 21:20:01.454843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.116 [2024-07-14 21:20:01.454879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.853 ms 00:20:50.116 [2024-07-14 21:20:01.454891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.116 [2024-07-14 21:20:01.455028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.116 [2024-07-14 21:20:01.455052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.116 [2024-07-14 21:20:01.455065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:50.116 [2024-07-14 21:20:01.455076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.116 [2024-07-14 21:20:01.455157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.116 [2024-07-14 21:20:01.455191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.116 [2024-07-14 21:20:01.455204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:50.116 [2024-07-14 21:20:01.455216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.116 [2024-07-14 21:20:01.455252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.116 [2024-07-14 21:20:01.455268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:50.116 [2024-07-14 21:20:01.455286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:50.117 [2024-07-14 21:20:01.455297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.117 [2024-07-14 21:20:01.455334] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.117 [2024-07-14 21:20:01.455350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.117 [2024-07-14 21:20:01.455362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.117 [2024-07-14 21:20:01.455374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:50.117 [2024-07-14 21:20:01.455385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.117 [2024-07-14 21:20:01.485040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.117 [2024-07-14 21:20:01.485140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.117 [2024-07-14 21:20:01.485176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.619 ms 00:20:50.117 [2024-07-14 21:20:01.485187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.117 [2024-07-14 21:20:01.485400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.117 [2024-07-14 21:20:01.485420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.117 [2024-07-14 21:20:01.485448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:50.117 [2024-07-14 21:20:01.485458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
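(Annotation) Each step above is traced by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status quartet, and the whole 'FTL startup' pipeline is summed in the finish_msg record that follows. Note that spdk_dd runs no RPC server: this entire startup was driven by the --json file given on its command line. Such a config is typically captured from a live target; a sketch under assumed names ($base_bdev stands for the 103424.00 MiB base device, nvc0n1p0 is the write-buffer cache named above; exact bdev_ftl_create flags may differ between SPDK versions):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Create the FTL bdev on a base bdev plus a cache bdev (assumed flag set).
  "$rpc" bdev_ftl_create -b ftl0 -d "$base_bdev" -c nvc0n1p0
  # Snapshot the running bdev configuration so spdk_dd can replay it offline.
  "$rpc" save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The superblock load step earlier logged 'SHM: clean 0, shm_clean 0', and the closing 'Set FTL dirty state' step (29.619 ms) marks the open instance dirty, which is presumably how an unclean stop would be detected on the next load.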
00:20:50.117 [2024-07-14 21:20:01.486487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.117 [2024-07-14 21:20:01.490666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.591 ms, result 0 00:20:50.117 [2024-07-14 21:20:01.491617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:50.117 [2024-07-14 21:20:01.507801] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:01.269  Copying: 26/256 [MB] (26 MBps) Copying: 50/256 [MB] (24 MBps) Copying: 74/256 [MB] (23 MBps) Copying: 98/256 [MB] (24 MBps) Copying: 121/256 [MB] (23 MBps) Copying: 143/256 [MB] (22 MBps) Copying: 166/256 [MB] (22 MBps) Copying: 189/256 [MB] (23 MBps) Copying: 212/256 [MB] (22 MBps) Copying: 234/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-14 21:20:12.499976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:01.269 [2024-07-14 21:20:12.513217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.513271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:01.269 [2024-07-14 21:20:12.513290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:01.269 [2024-07-14 21:20:12.513302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.513335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:01.269 [2024-07-14 21:20:12.516671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.516724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:01.269 [2024-07-14 21:20:12.516750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.312 ms 00:21:01.269 [2024-07-14 21:20:12.516761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.517069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.517097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:01.269 [2024-07-14 21:20:12.517110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:21:01.269 [2024-07-14 21:20:12.517121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.520986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.521016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:01.269 [2024-07-14 21:20:12.521036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.842 ms 00:21:01.269 [2024-07-14 21:20:12.521047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.528372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.528437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:01.269 [2024-07-14 21:20:12.528467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.301 ms 00:21:01.269 [2024-07-14 21:20:12.528479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 
21:20:12.557380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.557436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:01.269 [2024-07-14 21:20:12.557468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.828 ms 00:21:01.269 [2024-07-14 21:20:12.557478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.574426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.574481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:01.269 [2024-07-14 21:20:12.574514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.882 ms 00:21:01.269 [2024-07-14 21:20:12.574525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.574728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.574758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:01.269 [2024-07-14 21:20:12.574773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:01.269 [2024-07-14 21:20:12.574785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.606301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.606375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:01.269 [2024-07-14 21:20:12.606408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.480 ms 00:21:01.269 [2024-07-14 21:20:12.606419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.637193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.637249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:01.269 [2024-07-14 21:20:12.637296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.711 ms 00:21:01.269 [2024-07-14 21:20:12.637306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.665738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.665793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:01.269 [2024-07-14 21:20:12.665834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.371 ms 00:21:01.269 [2024-07-14 21:20:12.665845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.694229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.269 [2024-07-14 21:20:12.694302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:01.269 [2024-07-14 21:20:12.694334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.291 ms 00:21:01.269 [2024-07-14 21:20:12.694344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.269 [2024-07-14 21:20:12.694407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:01.269 [2024-07-14 21:20:12.694440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:21:01.269 [2024-07-14 21:20:12.694465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:21:01.269 [2024-07-14 21:20:12.694780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:01.269 [2024-07-14 21:20:12.694973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.694984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.694995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695365] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:01.270 [2024-07-14 21:20:12.695650] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:01.270 [2024-07-14 21:20:12.695661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: b1692d4e-1846-41c9-a805-9c9f076300af 00:21:01.270 [2024-07-14 21:20:12.695673] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:01.270 [2024-07-14 21:20:12.695684] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:01.270 [2024-07-14 21:20:12.695709] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:01.270 [2024-07-14 21:20:12.695721] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:01.270 [2024-07-14 21:20:12.695731] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:01.270 [2024-07-14 21:20:12.695742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:01.270 [2024-07-14 21:20:12.695752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:01.270 [2024-07-14 21:20:12.695762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:01.270 [2024-07-14 21:20:12.695772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:01.270 [2024-07-14 21:20:12.695784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.270 [2024-07-14 21:20:12.695809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:01.270 [2024-07-14 21:20:12.695828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:21:01.270 [2024-07-14 21:20:12.695840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.270 [2024-07-14 21:20:12.711997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.270 [2024-07-14 21:20:12.712065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:01.270 [2024-07-14 21:20:12.712082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.129 ms 00:21:01.270 [2024-07-14 21:20:12.712094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.270 [2024-07-14 21:20:12.712582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.270 [2024-07-14 21:20:12.712621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:01.270 [2024-07-14 21:20:12.712635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:21:01.270 [2024-07-14 21:20:12.712646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.270 [2024-07-14 21:20:12.750188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.270 [2024-07-14 21:20:12.750265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.270 [2024-07-14 21:20:12.750298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.270 [2024-07-14 21:20:12.750309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.270 [2024-07-14 21:20:12.750416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.270 [2024-07-14 21:20:12.750439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.270 [2024-07-14 21:20:12.750451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.270 [2024-07-14 21:20:12.750461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.270 [2024-07-14 21:20:12.750540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.270 [2024-07-14 21:20:12.750588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:21:01.270 [2024-07-14 21:20:12.750600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.270 [2024-07-14 21:20:12.750611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.270 [2024-07-14 21:20:12.750635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.270 [2024-07-14 21:20:12.750650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.270 [2024-07-14 21:20:12.750667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.270 [2024-07-14 21:20:12.750678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.530 [2024-07-14 21:20:12.842196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.530 [2024-07-14 21:20:12.842274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.530 [2024-07-14 21:20:12.842307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.530 [2024-07-14 21:20:12.842317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.530 [2024-07-14 21:20:12.925053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.530 [2024-07-14 21:20:12.925136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.530 [2024-07-14 21:20:12.925179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.530 [2024-07-14 21:20:12.925190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.530 [2024-07-14 21:20:12.925270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.530 [2024-07-14 21:20:12.925287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.530 [2024-07-14 21:20:12.925299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.530 [2024-07-14 21:20:12.925310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.530 [2024-07-14 21:20:12.925344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.530 [2024-07-14 21:20:12.925358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.530 [2024-07-14 21:20:12.925386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.530 [2024-07-14 21:20:12.925402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.530 [2024-07-14 21:20:12.925524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.530 [2024-07-14 21:20:12.925544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.530 [2024-07-14 21:20:12.925556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.530 [2024-07-14 21:20:12.925567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.530 [2024-07-14 21:20:12.925616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.530 [2024-07-14 21:20:12.925640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:01.530 [2024-07-14 21:20:12.925652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.530 [2024-07-14 21:20:12.925663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.530 [2024-07-14 21:20:12.925717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.530 [2024-07-14 21:20:12.925747] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:01.530 [2024-07-14 21:20:12.925760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:01.530 [2024-07-14 21:20:12.925771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:01.530 [2024-07-14 21:20:12.925844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:01.530 [2024-07-14 21:20:12.925863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:01.530 [2024-07-14 21:20:12.925876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:01.530 [2024-07-14 21:20:12.925892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:01.530 [2024-07-14 21:20:12.926055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.844 ms, result 0
00:21:02.466
00:21:02.466
00:21:02.466 21:20:13 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:21:03.032 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:21:03.032 21:20:14 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:21:03.032 21:20:14 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:21:03.032 21:20:14 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:21:03.032 21:20:14 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:03.032 21:20:14 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:21:03.291 21:20:14 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:21:03.291 Process with pid 80594 is not found
00:21:03.291 21:20:14 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80594
00:21:03.291 21:20:14 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80594 ']'
00:21:03.291 21:20:14 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80594
00:21:03.291 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80594) - No such process
00:21:03.291 21:20:14 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 80594 is not found'
00:21:03.291
00:21:03.291 real 1m8.232s
00:21:03.291 user 1m32.522s
00:21:03.291 sys 0m6.151s
00:21:03.291 21:20:14 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:03.291 ************************************
00:21:03.291 21:20:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:21:03.291 END TEST ftl_trim
00:21:03.291 ************************************
00:21:03.291 21:20:14 ftl -- common/autotest_common.sh@1142 -- # return 0
00:21:03.291 21:20:14 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:21:03.291 21:20:14 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:21:03.291 21:20:14 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:03.291 21:20:14 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:03.291 ************************************
00:21:03.291 START TEST ftl_restore
00:21:03.291 ************************************
00:21:03.291 21:20:14 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:21:03.291 * Looking for test storage...
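The ftl_trim teardown above is built around a checksum round trip: trim.sh records an md5 manifest of the test file before the FTL device is shut down, then verifies the re-read data with 'md5sum -c', which is what produces the '/home/vagrant/spdk_repo/spdk/test/ftl/data: OK' record. A minimal sketch of that record-then-verify pattern, using hypothetical paths rather than the actual trim.sh variables:

# Record a checksum manifest, restart the device under test, then verify.
data_file=/tmp/ftl_data              # hypothetical stand-in for test/ftl/data
manifest=/tmp/ftl_data.md5           # hypothetical stand-in for test/ftl/testfile.md5

md5sum "$data_file" > "$manifest"    # manifest line has the form "<hash>  <path>"

# ... tear the FTL bdev down, bring it back up, and re-read $data_file ...

md5sum -c "$manifest"                # prints "<path>: OK" and exits 0 on a match

A non-zero exit from 'md5sum -c' would trip the test's error trap, so the 'OK' line above is the actual pass/fail criterion for the trim run.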
00:21:03.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.tGlfeWPPZ6
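The records that follow show restore.sh consuming its command line ('-c 0000:00:10.0 0000:00:11.0'): 'getopts :u:c:f opt' routes '-c' into nv_cache, 'shift 2' drops the parsed option and its argument, and the remaining positional argument becomes the base device. A minimal sketch of that parsing loop, with only the '-c' branch spelled out and the '-u'/'-f' handling elided:

# getopts spec ':u:c:f': leading ':' selects silent error handling;
# 'u:' and 'c:' each take an argument, 'f' is a bare flag.
while getopts :u:c:f opt; do
  case $opt in
    c) nv_cache=$OPTARG ;;         # -c <bdf>: PCI address to use as the NV cache
    *) ;;                          # -u/-f branches elided in this sketch
  esac
done
shift $((OPTIND - 1))              # here OPTIND-1 is 2, matching the logged 'shift 2'
device=$1                          # first positional argument: base device PCI address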
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80850
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80850
00:21:03.292 21:20:14 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 80850 ']'
00:21:03.292 21:20:14 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:03.292 21:20:14 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:03.292 21:20:14 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:03.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:03.292 21:20:14 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:03.292 21:20:14 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:03.292 21:20:14 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:21:03.551 [2024-07-14 21:20:14.929332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:21:03.551 [2024-07-14 21:20:14.929491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80850 ]
00:21:03.551 [2024-07-14 21:20:15.092551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:04.118 [2024-07-14 21:20:15.355609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:04.685 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:04.685 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0
00:21:04.685 21:20:16 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:21:04.685 21:20:16 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0
00:21:04.685 21:20:16 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:21:04.685 21:20:16 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424
00:21:04.686 21:20:16 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev
00:21:04.686 21:20:16 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:21:04.944 21:20:16 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:21:04.944 21:20:16 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size
00:21:04.944 21:20:16 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:21:04.944 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1
00:21:04.944 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:21:04.944 21:20:16 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:21:04.944 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:21:04.944 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:05.203 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:05.203 { 00:21:05.203 "name": "nvme0n1", 00:21:05.203 "aliases": [ 00:21:05.203 "3f6410c4-28fc-498e-8d7e-63ecfef8ec01" 00:21:05.203 ], 00:21:05.203 "product_name": "NVMe disk", 00:21:05.203 "block_size": 4096, 00:21:05.203 "num_blocks": 1310720, 00:21:05.203 "uuid": "3f6410c4-28fc-498e-8d7e-63ecfef8ec01", 00:21:05.203 "assigned_rate_limits": { 00:21:05.203 "rw_ios_per_sec": 0, 00:21:05.203 "rw_mbytes_per_sec": 0, 00:21:05.203 "r_mbytes_per_sec": 0, 00:21:05.203 "w_mbytes_per_sec": 0 00:21:05.203 }, 00:21:05.203 "claimed": true, 00:21:05.203 "claim_type": "read_many_write_one", 00:21:05.203 "zoned": false, 00:21:05.203 "supported_io_types": { 00:21:05.203 "read": true, 00:21:05.203 "write": true, 00:21:05.203 "unmap": true, 00:21:05.203 "flush": true, 00:21:05.203 "reset": true, 00:21:05.203 "nvme_admin": true, 00:21:05.203 "nvme_io": true, 00:21:05.203 "nvme_io_md": false, 00:21:05.203 "write_zeroes": true, 00:21:05.203 "zcopy": false, 00:21:05.203 "get_zone_info": false, 00:21:05.203 "zone_management": false, 00:21:05.203 "zone_append": false, 00:21:05.203 "compare": true, 00:21:05.203 "compare_and_write": false, 00:21:05.203 "abort": true, 00:21:05.203 "seek_hole": false, 00:21:05.203 "seek_data": false, 00:21:05.203 "copy": true, 00:21:05.203 "nvme_iov_md": false 00:21:05.203 }, 00:21:05.203 "driver_specific": { 00:21:05.203 "nvme": [ 00:21:05.203 { 00:21:05.203 "pci_address": "0000:00:11.0", 00:21:05.203 "trid": { 00:21:05.203 "trtype": "PCIe", 00:21:05.203 "traddr": "0000:00:11.0" 00:21:05.203 }, 00:21:05.203 "ctrlr_data": { 00:21:05.203 "cntlid": 0, 00:21:05.203 "vendor_id": "0x1b36", 00:21:05.203 "model_number": "QEMU NVMe Ctrl", 00:21:05.203 "serial_number": "12341", 00:21:05.203 "firmware_revision": "8.0.0", 00:21:05.203 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:05.203 "oacs": { 00:21:05.203 "security": 0, 00:21:05.203 "format": 1, 00:21:05.203 "firmware": 0, 00:21:05.203 "ns_manage": 1 00:21:05.203 }, 00:21:05.203 "multi_ctrlr": false, 00:21:05.203 "ana_reporting": false 00:21:05.203 }, 00:21:05.203 "vs": { 00:21:05.203 "nvme_version": "1.4" 00:21:05.203 }, 00:21:05.203 "ns_data": { 00:21:05.203 "id": 1, 00:21:05.203 "can_share": false 00:21:05.203 } 00:21:05.203 } 00:21:05.203 ], 00:21:05.203 "mp_policy": "active_passive" 00:21:05.203 } 00:21:05.203 } 00:21:05.203 ]' 00:21:05.203 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:05.203 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:21:05.203 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:05.203 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:05.204 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:05.204 21:20:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:21:05.204 21:20:16 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:05.204 21:20:16 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:05.204 21:20:16 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:05.204 21:20:16 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:05.204 21:20:16 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:05.462 21:20:16 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=545d5dfe-9626-4097-9cd9-0ac04e846900 00:21:05.462 21:20:16 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:05.462 21:20:16 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 545d5dfe-9626-4097-9cd9-0ac04e846900 00:21:06.029 21:20:17 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:06.029 21:20:17 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=50a4b650-b76a-4a84-9c68-d118af8ef27d 00:21:06.029 21:20:17 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 50a4b650-b76a-4a84-9c68-d118af8ef27d 00:21:06.288 21:20:17 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:06.288 21:20:17 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:06.288 21:20:17 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:06.288 21:20:17 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:06.288 21:20:17 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:06.288 21:20:17 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:06.288 21:20:17 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:06.288 21:20:17 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:06.288 21:20:17 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:06.288 21:20:17 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:06.288 21:20:17 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:21:06.288 21:20:17 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:21:06.288 21:20:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:06.546 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:06.546 { 00:21:06.546 "name": "e327cf1c-bc27-41f1-b996-aa058bf17836", 00:21:06.546 "aliases": [ 00:21:06.546 "lvs/nvme0n1p0" 00:21:06.546 ], 00:21:06.546 "product_name": "Logical Volume", 00:21:06.546 "block_size": 4096, 00:21:06.546 "num_blocks": 26476544, 00:21:06.546 "uuid": "e327cf1c-bc27-41f1-b996-aa058bf17836", 00:21:06.546 "assigned_rate_limits": { 00:21:06.546 "rw_ios_per_sec": 0, 00:21:06.546 "rw_mbytes_per_sec": 0, 00:21:06.546 "r_mbytes_per_sec": 0, 00:21:06.546 "w_mbytes_per_sec": 0 00:21:06.546 }, 00:21:06.546 "claimed": false, 00:21:06.546 "zoned": false, 00:21:06.546 "supported_io_types": { 00:21:06.546 "read": true, 00:21:06.546 "write": true, 00:21:06.546 "unmap": true, 00:21:06.546 "flush": false, 00:21:06.546 "reset": true, 00:21:06.546 "nvme_admin": false, 00:21:06.546 "nvme_io": false, 00:21:06.546 "nvme_io_md": false, 00:21:06.546 "write_zeroes": true, 00:21:06.546 "zcopy": false, 00:21:06.546 "get_zone_info": false, 00:21:06.546 "zone_management": false, 00:21:06.546 "zone_append": false, 00:21:06.546 "compare": false, 00:21:06.546 "compare_and_write": false, 00:21:06.546 "abort": 
false, 00:21:06.546 "seek_hole": true, 00:21:06.546 "seek_data": true, 00:21:06.546 "copy": false, 00:21:06.546 "nvme_iov_md": false 00:21:06.546 }, 00:21:06.546 "driver_specific": { 00:21:06.546 "lvol": { 00:21:06.546 "lvol_store_uuid": "50a4b650-b76a-4a84-9c68-d118af8ef27d", 00:21:06.546 "base_bdev": "nvme0n1", 00:21:06.546 "thin_provision": true, 00:21:06.546 "num_allocated_clusters": 0, 00:21:06.546 "snapshot": false, 00:21:06.546 "clone": false, 00:21:06.546 "esnap_clone": false 00:21:06.546 } 00:21:06.546 } 00:21:06.546 } 00:21:06.546 ]' 00:21:06.546 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:06.546 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:21:06.546 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:06.804 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:06.804 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:06.804 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:21:06.804 21:20:18 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:06.804 21:20:18 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:06.804 21:20:18 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:07.063 21:20:18 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:07.063 21:20:18 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:07.063 21:20:18 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:07.063 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:07.063 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:07.063 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:21:07.063 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:21:07.063 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:07.321 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:07.321 { 00:21:07.321 "name": "e327cf1c-bc27-41f1-b996-aa058bf17836", 00:21:07.321 "aliases": [ 00:21:07.321 "lvs/nvme0n1p0" 00:21:07.321 ], 00:21:07.321 "product_name": "Logical Volume", 00:21:07.321 "block_size": 4096, 00:21:07.321 "num_blocks": 26476544, 00:21:07.321 "uuid": "e327cf1c-bc27-41f1-b996-aa058bf17836", 00:21:07.321 "assigned_rate_limits": { 00:21:07.321 "rw_ios_per_sec": 0, 00:21:07.321 "rw_mbytes_per_sec": 0, 00:21:07.321 "r_mbytes_per_sec": 0, 00:21:07.321 "w_mbytes_per_sec": 0 00:21:07.321 }, 00:21:07.321 "claimed": false, 00:21:07.321 "zoned": false, 00:21:07.321 "supported_io_types": { 00:21:07.321 "read": true, 00:21:07.321 "write": true, 00:21:07.321 "unmap": true, 00:21:07.321 "flush": false, 00:21:07.321 "reset": true, 00:21:07.321 "nvme_admin": false, 00:21:07.321 "nvme_io": false, 00:21:07.321 "nvme_io_md": false, 00:21:07.321 "write_zeroes": true, 00:21:07.321 "zcopy": false, 00:21:07.321 "get_zone_info": false, 00:21:07.321 "zone_management": false, 00:21:07.321 "zone_append": false, 00:21:07.321 "compare": false, 00:21:07.321 "compare_and_write": false, 00:21:07.321 "abort": false, 00:21:07.321 "seek_hole": true, 00:21:07.321 "seek_data": 
true, 00:21:07.321 "copy": false, 00:21:07.321 "nvme_iov_md": false 00:21:07.321 }, 00:21:07.321 "driver_specific": { 00:21:07.321 "lvol": { 00:21:07.321 "lvol_store_uuid": "50a4b650-b76a-4a84-9c68-d118af8ef27d", 00:21:07.321 "base_bdev": "nvme0n1", 00:21:07.321 "thin_provision": true, 00:21:07.321 "num_allocated_clusters": 0, 00:21:07.321 "snapshot": false, 00:21:07.321 "clone": false, 00:21:07.321 "esnap_clone": false 00:21:07.321 } 00:21:07.321 } 00:21:07.321 } 00:21:07.321 ]' 00:21:07.321 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:07.321 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:21:07.321 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:07.321 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:07.321 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:07.321 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:21:07.321 21:20:18 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:07.321 21:20:18 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:07.580 21:20:18 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:07.580 21:20:18 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:07.580 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:07.580 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:07.580 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:21:07.580 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:21:07.580 21:20:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e327cf1c-bc27-41f1-b996-aa058bf17836 00:21:07.838 21:20:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:07.838 { 00:21:07.838 "name": "e327cf1c-bc27-41f1-b996-aa058bf17836", 00:21:07.838 "aliases": [ 00:21:07.838 "lvs/nvme0n1p0" 00:21:07.838 ], 00:21:07.838 "product_name": "Logical Volume", 00:21:07.838 "block_size": 4096, 00:21:07.838 "num_blocks": 26476544, 00:21:07.838 "uuid": "e327cf1c-bc27-41f1-b996-aa058bf17836", 00:21:07.838 "assigned_rate_limits": { 00:21:07.838 "rw_ios_per_sec": 0, 00:21:07.838 "rw_mbytes_per_sec": 0, 00:21:07.838 "r_mbytes_per_sec": 0, 00:21:07.838 "w_mbytes_per_sec": 0 00:21:07.838 }, 00:21:07.838 "claimed": false, 00:21:07.838 "zoned": false, 00:21:07.838 "supported_io_types": { 00:21:07.838 "read": true, 00:21:07.838 "write": true, 00:21:07.838 "unmap": true, 00:21:07.838 "flush": false, 00:21:07.838 "reset": true, 00:21:07.838 "nvme_admin": false, 00:21:07.838 "nvme_io": false, 00:21:07.838 "nvme_io_md": false, 00:21:07.838 "write_zeroes": true, 00:21:07.838 "zcopy": false, 00:21:07.838 "get_zone_info": false, 00:21:07.838 "zone_management": false, 00:21:07.838 "zone_append": false, 00:21:07.838 "compare": false, 00:21:07.838 "compare_and_write": false, 00:21:07.838 "abort": false, 00:21:07.838 "seek_hole": true, 00:21:07.838 "seek_data": true, 00:21:07.839 "copy": false, 00:21:07.839 "nvme_iov_md": false 00:21:07.839 }, 00:21:07.839 "driver_specific": { 00:21:07.839 "lvol": { 00:21:07.839 "lvol_store_uuid": "50a4b650-b76a-4a84-9c68-d118af8ef27d", 00:21:07.839 "base_bdev": 
"nvme0n1", 00:21:07.839 "thin_provision": true, 00:21:07.839 "num_allocated_clusters": 0, 00:21:07.839 "snapshot": false, 00:21:07.839 "clone": false, 00:21:07.839 "esnap_clone": false 00:21:07.839 } 00:21:07.839 } 00:21:07.839 } 00:21:07.839 ]' 00:21:07.839 21:20:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:07.839 21:20:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:21:07.839 21:20:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:07.839 21:20:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:07.839 21:20:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:07.839 21:20:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:21:07.839 21:20:19 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:07.839 21:20:19 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e327cf1c-bc27-41f1-b996-aa058bf17836 --l2p_dram_limit 10' 00:21:07.839 21:20:19 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:07.839 21:20:19 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:07.839 21:20:19 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:07.839 21:20:19 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:07.839 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:07.839 21:20:19 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e327cf1c-bc27-41f1-b996-aa058bf17836 --l2p_dram_limit 10 -c nvc0n1p0 00:21:08.098 [2024-07-14 21:20:19.518966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.098 [2024-07-14 21:20:19.519040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:08.098 [2024-07-14 21:20:19.519060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:08.098 [2024-07-14 21:20:19.519074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.098 [2024-07-14 21:20:19.519153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.098 [2024-07-14 21:20:19.519174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:08.098 [2024-07-14 21:20:19.519187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:08.098 [2024-07-14 21:20:19.519200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.098 [2024-07-14 21:20:19.519229] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:08.098 [2024-07-14 21:20:19.520221] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:08.098 [2024-07-14 21:20:19.520249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.098 [2024-07-14 21:20:19.520267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:08.098 [2024-07-14 21:20:19.520280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:21:08.098 [2024-07-14 21:20:19.520293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.098 [2024-07-14 21:20:19.520458] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 183c2ef8-c7fe-444a-9b23-fb17f92b76cf 00:21:08.098 [2024-07-14 
21:20:19.521628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.098 [2024-07-14 21:20:19.521681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:08.098 [2024-07-14 21:20:19.521700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:08.098 [2024-07-14 21:20:19.521727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.098 [2024-07-14 21:20:19.526303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.098 [2024-07-14 21:20:19.526379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:08.099 [2024-07-14 21:20:19.526401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:21:08.099 [2024-07-14 21:20:19.526413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.099 [2024-07-14 21:20:19.526557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.099 [2024-07-14 21:20:19.526578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:08.099 [2024-07-14 21:20:19.526594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:08.099 [2024-07-14 21:20:19.526606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.099 [2024-07-14 21:20:19.526680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.099 [2024-07-14 21:20:19.526699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:08.099 [2024-07-14 21:20:19.526714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:08.099 [2024-07-14 21:20:19.526737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.099 [2024-07-14 21:20:19.526773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:08.099 [2024-07-14 21:20:19.531503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.099 [2024-07-14 21:20:19.531546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:08.099 [2024-07-14 21:20:19.531563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:21:08.099 [2024-07-14 21:20:19.531579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.099 [2024-07-14 21:20:19.531629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.099 [2024-07-14 21:20:19.531650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:08.099 [2024-07-14 21:20:19.531665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:08.099 [2024-07-14 21:20:19.531678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.099 [2024-07-14 21:20:19.531745] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:08.099 [2024-07-14 21:20:19.531938] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:08.099 [2024-07-14 21:20:19.531963] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:08.099 [2024-07-14 21:20:19.531985] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:08.099 [2024-07-14 21:20:19.532001] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:21:08.099 [2024-07-14 21:20:19.532017] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:08.099 [2024-07-14 21:20:19.532031] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:08.099 [2024-07-14 21:20:19.532045] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:08.099 [2024-07-14 21:20:19.532059] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:08.099 [2024-07-14 21:20:19.532075] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:08.099 [2024-07-14 21:20:19.532088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.099 [2024-07-14 21:20:19.532102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:08.099 [2024-07-14 21:20:19.532116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:21:08.099 [2024-07-14 21:20:19.532130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.099 [2024-07-14 21:20:19.532224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.099 [2024-07-14 21:20:19.532243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:08.099 [2024-07-14 21:20:19.532257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:08.099 [2024-07-14 21:20:19.532270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.099 [2024-07-14 21:20:19.532384] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:08.099 [2024-07-14 21:20:19.532421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:08.099 [2024-07-14 21:20:19.532446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:08.099 [2024-07-14 21:20:19.532462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:08.099 [2024-07-14 21:20:19.532488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:08.099 [2024-07-14 21:20:19.532515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:08.099 [2024-07-14 21:20:19.532527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:08.099 [2024-07-14 21:20:19.532551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:08.099 [2024-07-14 21:20:19.532573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:08.099 [2024-07-14 21:20:19.532584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:08.099 [2024-07-14 21:20:19.532600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:08.099 [2024-07-14 21:20:19.532612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:08.099 [2024-07-14 21:20:19.532625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:08.099 [2024-07-14 21:20:19.532653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:21:08.099 [2024-07-14 21:20:19.532664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:08.099 [2024-07-14 21:20:19.532689] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.099 [2024-07-14 21:20:19.532722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:08.099 [2024-07-14 21:20:19.532745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.099 [2024-07-14 21:20:19.532772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:08.099 [2024-07-14 21:20:19.532783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.099 [2024-07-14 21:20:19.532826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:08.099 [2024-07-14 21:20:19.532840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.099 [2024-07-14 21:20:19.532864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:08.099 [2024-07-14 21:20:19.532876] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:08.099 [2024-07-14 21:20:19.532918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:08.099 [2024-07-14 21:20:19.532931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:08.099 [2024-07-14 21:20:19.532943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:08.099 [2024-07-14 21:20:19.532956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:08.099 [2024-07-14 21:20:19.532967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:08.099 [2024-07-14 21:20:19.532983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.099 [2024-07-14 21:20:19.532995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:08.099 [2024-07-14 21:20:19.533023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:08.099 [2024-07-14 21:20:19.533033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.099 [2024-07-14 21:20:19.533060] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:08.099 [2024-07-14 21:20:19.533071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:08.099 [2024-07-14 21:20:19.533084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:08.099 [2024-07-14 21:20:19.533095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.099 [2024-07-14 21:20:19.533109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:08.099 [2024-07-14 21:20:19.533120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:08.099 [2024-07-14 21:20:19.533134] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:08.099 [2024-07-14 21:20:19.533145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:08.099 [2024-07-14 21:20:19.533157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:08.099 [2024-07-14 21:20:19.533167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:08.099 [2024-07-14 21:20:19.533184] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:08.099 [2024-07-14 21:20:19.533198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.099 [2024-07-14 21:20:19.533216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:08.099 [2024-07-14 21:20:19.533227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:08.099 [2024-07-14 21:20:19.533240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:08.099 [2024-07-14 21:20:19.533252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:08.099 [2024-07-14 21:20:19.533264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:08.099 [2024-07-14 21:20:19.533275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:08.099 [2024-07-14 21:20:19.533288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:08.099 [2024-07-14 21:20:19.533299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:08.099 [2024-07-14 21:20:19.533314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:08.099 [2024-07-14 21:20:19.533325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:08.099 [2024-07-14 21:20:19.533340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:08.099 [2024-07-14 21:20:19.533351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:08.099 [2024-07-14 21:20:19.533364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:08.099 [2024-07-14 21:20:19.533376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:08.099 [2024-07-14 21:20:19.533389] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:08.100 [2024-07-14 21:20:19.533401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.100 [2024-07-14 21:20:19.533416] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:08.100 [2024-07-14 21:20:19.533428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:08.100 [2024-07-14 21:20:19.533441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:08.100 [2024-07-14 21:20:19.533453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:08.100 [2024-07-14 21:20:19.533467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.100 [2024-07-14 21:20:19.533479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:08.100 [2024-07-14 21:20:19.533492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:21:08.100 [2024-07-14 21:20:19.533503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.100 [2024-07-14 21:20:19.533558] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:08.100 [2024-07-14 21:20:19.533575] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:10.633 [2024-07-14 21:20:21.681731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.681837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:10.633 [2024-07-14 21:20:21.681863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2148.177 ms 00:21:10.633 [2024-07-14 21:20:21.681877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.716083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.716138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.633 [2024-07-14 21:20:21.716162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.932 ms 00:21:10.633 [2024-07-14 21:20:21.716175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.716438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.716461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:10.633 [2024-07-14 21:20:21.716478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:10.633 [2024-07-14 21:20:21.716493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.755097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.755168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.633 [2024-07-14 21:20:21.755189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.541 ms 00:21:10.633 [2024-07-14 21:20:21.755202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.755279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.755302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:10.633 [2024-07-14 21:20:21.755317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:21:10.633 [2024-07-14 21:20:21.755329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.755728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.755748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:10.633 [2024-07-14 21:20:21.755763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:21:10.633 [2024-07-14 21:20:21.755775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.755985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.756007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:10.633 [2024-07-14 21:20:21.756026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:21:10.633 [2024-07-14 21:20:21.756039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.774046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.774098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:10.633 [2024-07-14 21:20:21.774121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.975 ms 00:21:10.633 [2024-07-14 21:20:21.774134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.788243] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:10.633 [2024-07-14 21:20:21.791203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.791257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:10.633 [2024-07-14 21:20:21.791274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.932 ms 00:21:10.633 [2024-07-14 21:20:21.791290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.863484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.863568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:10.633 [2024-07-14 21:20:21.863589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.147 ms 00:21:10.633 [2024-07-14 21:20:21.863603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.863851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.863890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:10.633 [2024-07-14 21:20:21.863904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:21:10.633 [2024-07-14 21:20:21.863920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.894971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.895034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:10.633 [2024-07-14 21:20:21.895058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.984 ms 00:21:10.633 [2024-07-14 21:20:21.895073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.925967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 
21:20:21.926028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:10.633 [2024-07-14 21:20:21.926046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.841 ms 00:21:10.633 [2024-07-14 21:20:21.926060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:21.926807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:21.926855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:10.633 [2024-07-14 21:20:21.926872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:21:10.633 [2024-07-14 21:20:21.926890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:22.014029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:22.014123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:10.633 [2024-07-14 21:20:22.014144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.072 ms 00:21:10.633 [2024-07-14 21:20:22.014162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:22.045838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:22.045905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:10.633 [2024-07-14 21:20:22.045923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.623 ms 00:21:10.633 [2024-07-14 21:20:22.045953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:22.076944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:22.077004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:10.633 [2024-07-14 21:20:22.077036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.943 ms 00:21:10.633 [2024-07-14 21:20:22.077048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:22.107954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:22.108012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:10.633 [2024-07-14 21:20:22.108030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.859 ms 00:21:10.633 [2024-07-14 21:20:22.108045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:22.108113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:22.108152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:10.633 [2024-07-14 21:20:22.108166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:10.633 [2024-07-14 21:20:22.108211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:22.108369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.633 [2024-07-14 21:20:22.108421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:10.633 [2024-07-14 21:20:22.108440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:10.633 [2024-07-14 21:20:22.108454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.633 [2024-07-14 21:20:22.109600] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2590.114 ms, result 0 00:21:10.633 { 00:21:10.633 "name": "ftl0", 00:21:10.633 "uuid": "183c2ef8-c7fe-444a-9b23-fb17f92b76cf" 00:21:10.633 } 00:21:10.633 21:20:22 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:10.633 21:20:22 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:10.892 21:20:22 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:10.892 21:20:22 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:11.150 [2024-07-14 21:20:22.633117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.150 [2024-07-14 21:20:22.633191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.150 [2024-07-14 21:20:22.633214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:11.150 [2024-07-14 21:20:22.633226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.150 [2024-07-14 21:20:22.633281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:11.150 [2024-07-14 21:20:22.636658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.150 [2024-07-14 21:20:22.636710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.150 [2024-07-14 21:20:22.636756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.353 ms 00:21:11.150 [2024-07-14 21:20:22.636771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.150 [2024-07-14 21:20:22.637146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.150 [2024-07-14 21:20:22.637184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.150 [2024-07-14 21:20:22.637214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:21:11.151 [2024-07-14 21:20:22.637229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.151 [2024-07-14 21:20:22.640759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.151 [2024-07-14 21:20:22.640805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.151 [2024-07-14 21:20:22.640845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.504 ms 00:21:11.151 [2024-07-14 21:20:22.640859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.151 [2024-07-14 21:20:22.647536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.151 [2024-07-14 21:20:22.647584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.151 [2024-07-14 21:20:22.647601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.651 ms 00:21:11.151 [2024-07-14 21:20:22.647615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.151 [2024-07-14 21:20:22.678550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.151 [2024-07-14 21:20:22.678633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:11.151 [2024-07-14 21:20:22.678652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.835 ms 00:21:11.151 [2024-07-14 21:20:22.678666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.417 [2024-07-14 
21:20:22.698507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.417 [2024-07-14 21:20:22.698574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:11.417 [2024-07-14 21:20:22.698592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.790 ms 00:21:11.417 [2024-07-14 21:20:22.698607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.417 [2024-07-14 21:20:22.698824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.417 [2024-07-14 21:20:22.698885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:11.417 [2024-07-14 21:20:22.698899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:21:11.417 [2024-07-14 21:20:22.698914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.417 [2024-07-14 21:20:22.730832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.417 [2024-07-14 21:20:22.730882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:11.417 [2024-07-14 21:20:22.730901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.891 ms 00:21:11.417 [2024-07-14 21:20:22.730915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.417 [2024-07-14 21:20:22.761727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.417 [2024-07-14 21:20:22.761788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:11.417 [2024-07-14 21:20:22.761804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.760 ms 00:21:11.417 [2024-07-14 21:20:22.761826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.417 [2024-07-14 21:20:22.792578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.417 [2024-07-14 21:20:22.792626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:11.417 [2024-07-14 21:20:22.792644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.695 ms 00:21:11.417 [2024-07-14 21:20:22.792658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.417 [2024-07-14 21:20:22.824582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.417 [2024-07-14 21:20:22.824628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:11.417 [2024-07-14 21:20:22.824646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.803 ms 00:21:11.417 [2024-07-14 21:20:22.824660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.417 [2024-07-14 21:20:22.824710] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:11.417 [2024-07-14 21:20:22.824761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 
21:20:22.824873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.824998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:21:11.417 [2024-07-14 21:20:22.825221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:21:11.417 [2024-07-14 21:20:22.825595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.825986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-07-14 21:20:22.826260] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:11.418 [2024-07-14 21:20:22.826274] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 183c2ef8-c7fe-444a-9b23-fb17f92b76cf 00:21:11.418 [2024-07-14 21:20:22.826289] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:11.418 [2024-07-14 21:20:22.826301] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:11.418 [2024-07-14 21:20:22.826317] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:11.418 [2024-07-14 21:20:22.826329] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:11.418 [2024-07-14 21:20:22.826342] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:11.418 [2024-07-14 21:20:22.826354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:11.418 [2024-07-14 21:20:22.826368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:11.418 [2024-07-14 21:20:22.826379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:11.418 [2024-07-14 21:20:22.826391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:11.418 [2024-07-14 21:20:22.826403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.418 [2024-07-14 21:20:22.826418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:11.418 [2024-07-14 21:20:22.826431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:21:11.418 [2024-07-14 21:20:22.826445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.418 [2024-07-14 21:20:22.843085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.418 [2024-07-14 21:20:22.843141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:11.418 [2024-07-14 21:20:22.843158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.574 ms 00:21:11.418 [2024-07-14 21:20:22.843172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.418 [2024-07-14 21:20:22.843585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.418 [2024-07-14 21:20:22.843613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:11.418 [2024-07-14 21:20:22.843626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:21:11.418 [2024-07-14 21:20:22.843641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.418 [2024-07-14 21:20:22.893726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.418 [2024-07-14 21:20:22.893816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.418 [2024-07-14 21:20:22.893868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.418 [2024-07-14 21:20:22.893885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.418 [2024-07-14 21:20:22.893967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.418 [2024-07-14 21:20:22.893987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.418 [2024-07-14 21:20:22.894000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.418 [2024-07-14 21:20:22.894018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.418 [2024-07-14 21:20:22.894132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.418 [2024-07-14 21:20:22.894158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.418 [2024-07-14 21:20:22.894172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.418 [2024-07-14 21:20:22.894186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.418 [2024-07-14 21:20:22.894243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.418 [2024-07-14 21:20:22.894262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:21:11.418 [2024-07-14 21:20:22.894274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.418 [2024-07-14 21:20:22.894286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:22.991980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.689 [2024-07-14 21:20:22.992058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.689 [2024-07-14 21:20:22.992078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.689 [2024-07-14 21:20:22.992094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:23.074162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.689 [2024-07-14 21:20:23.074240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.689 [2024-07-14 21:20:23.074291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.689 [2024-07-14 21:20:23.074310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:23.074420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.689 [2024-07-14 21:20:23.074444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.689 [2024-07-14 21:20:23.074459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.689 [2024-07-14 21:20:23.074472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:23.074536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.689 [2024-07-14 21:20:23.074562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.689 [2024-07-14 21:20:23.074576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.689 [2024-07-14 21:20:23.074589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:23.074721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.689 [2024-07-14 21:20:23.074775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.689 [2024-07-14 21:20:23.074788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.689 [2024-07-14 21:20:23.074817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:23.074903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.689 [2024-07-14 21:20:23.074946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:11.689 [2024-07-14 21:20:23.074962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.689 [2024-07-14 21:20:23.074976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:23.075033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.689 [2024-07-14 21:20:23.075053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.689 [2024-07-14 21:20:23.075067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.689 [2024-07-14 21:20:23.075081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:23.075139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.689 [2024-07-14 21:20:23.075164] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.689 [2024-07-14 21:20:23.075178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.689 [2024-07-14 21:20:23.075193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.689 [2024-07-14 21:20:23.075396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.231 ms, result 0 00:21:11.689 true 00:21:11.689 21:20:23 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80850 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 80850 ']' 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 80850 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80850 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:11.689 killing process with pid 80850 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80850' 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 80850 00:21:11.689 21:20:23 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 80850 00:21:16.959 21:20:27 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:21.146 262144+0 records in 00:21:21.146 262144+0 records out 00:21:21.146 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.43302 s, 242 MB/s 00:21:21.146 21:20:32 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:23.048 21:20:34 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:23.048 [2024-07-14 21:20:34.465186] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:23.048 [2024-07-14 21:20:34.465366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81081 ] 00:21:23.306 [2024-07-14 21:20:34.637120] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.565 [2024-07-14 21:20:34.857687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.825 [2024-07-14 21:20:35.163304] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.825 [2024-07-14 21:20:35.163387] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.825 [2024-07-14 21:20:35.323528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.323588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:23.825 [2024-07-14 21:20:35.323618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:23.825 [2024-07-14 21:20:35.323637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.323738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.323769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.825 [2024-07-14 21:20:35.323790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:23.825 [2024-07-14 21:20:35.323852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.323922] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:23.825 [2024-07-14 21:20:35.325224] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:23.825 [2024-07-14 21:20:35.325274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.325306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.825 [2024-07-14 21:20:35.325327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:21:23.825 [2024-07-14 21:20:35.325345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.326645] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:23.825 [2024-07-14 21:20:35.343506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.343558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:23.825 [2024-07-14 21:20:35.343586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.862 ms 00:21:23.825 [2024-07-14 21:20:35.343605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.343704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.343733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:23.825 [2024-07-14 21:20:35.343758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:23.825 [2024-07-14 21:20:35.343777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.348770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:23.825 [2024-07-14 21:20:35.348872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.825 [2024-07-14 21:20:35.348899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:21:23.825 [2024-07-14 21:20:35.348920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.349245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.349292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.825 [2024-07-14 21:20:35.349316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:23.825 [2024-07-14 21:20:35.349337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.349443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.349472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:23.825 [2024-07-14 21:20:35.349495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:23.825 [2024-07-14 21:20:35.349513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.349569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:23.825 [2024-07-14 21:20:35.354164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.354211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.825 [2024-07-14 21:20:35.354238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:21:23.825 [2024-07-14 21:20:35.354275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.354370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.825 [2024-07-14 21:20:35.354401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:23.825 [2024-07-14 21:20:35.354427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:23.825 [2024-07-14 21:20:35.354448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.825 [2024-07-14 21:20:35.354555] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:23.825 [2024-07-14 21:20:35.354615] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:23.825 [2024-07-14 21:20:35.354680] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:23.826 [2024-07-14 21:20:35.354724] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:23.826 [2024-07-14 21:20:35.354875] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:23.826 [2024-07-14 21:20:35.354909] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:23.826 [2024-07-14 21:20:35.354935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:23.826 [2024-07-14 21:20:35.354963] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:23.826 [2024-07-14 21:20:35.354987] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:23.826 [2024-07-14 21:20:35.355010] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:23.826 [2024-07-14 21:20:35.355029] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:23.826 [2024-07-14 21:20:35.355048] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:23.826 [2024-07-14 21:20:35.355067] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:23.826 [2024-07-14 21:20:35.355089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.826 [2024-07-14 21:20:35.355116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:23.826 [2024-07-14 21:20:35.355139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:21:23.826 [2024-07-14 21:20:35.355160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.826 [2024-07-14 21:20:35.355318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.826 [2024-07-14 21:20:35.355345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:23.826 [2024-07-14 21:20:35.355367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:23.826 [2024-07-14 21:20:35.355387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.826 [2024-07-14 21:20:35.355526] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:23.826 [2024-07-14 21:20:35.355556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:23.826 [2024-07-14 21:20:35.355587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.826 [2024-07-14 21:20:35.355607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.826 [2024-07-14 21:20:35.355627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:23.826 [2024-07-14 21:20:35.355646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:23.826 [2024-07-14 21:20:35.355681] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:23.826 [2024-07-14 21:20:35.355715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:23.826 [2024-07-14 21:20:35.355733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:23.826 [2024-07-14 21:20:35.355752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.826 [2024-07-14 21:20:35.355771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:23.826 [2024-07-14 21:20:35.355789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:23.826 [2024-07-14 21:20:35.355807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.826 [2024-07-14 21:20:35.355825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:23.826 [2024-07-14 21:20:35.355843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:23.826 [2024-07-14 21:20:35.355879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.826 [2024-07-14 21:20:35.355899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:23.826 [2024-07-14 21:20:35.355919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:23.826 [2024-07-14 21:20:35.355936] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.826 [2024-07-14 21:20:35.355953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:23.826 [2024-07-14 21:20:35.355988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:23.826 [2024-07-14 21:20:35.356006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.826 [2024-07-14 21:20:35.356025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:23.826 [2024-07-14 21:20:35.356047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:23.826 [2024-07-14 21:20:35.356065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.826 [2024-07-14 21:20:35.356083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:23.826 [2024-07-14 21:20:35.356101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:23.826 [2024-07-14 21:20:35.356119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.826 [2024-07-14 21:20:35.356137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:23.826 [2024-07-14 21:20:35.356156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:23.826 [2024-07-14 21:20:35.356173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.826 [2024-07-14 21:20:35.356191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:23.826 [2024-07-14 21:20:35.356209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:23.826 [2024-07-14 21:20:35.356226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.826 [2024-07-14 21:20:35.356244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:23.826 [2024-07-14 21:20:35.356262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:23.826 [2024-07-14 21:20:35.356280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.826 [2024-07-14 21:20:35.356297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:23.826 [2024-07-14 21:20:35.356315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:23.826 [2024-07-14 21:20:35.356335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.826 [2024-07-14 21:20:35.356353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:23.826 [2024-07-14 21:20:35.356371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:23.826 [2024-07-14 21:20:35.356399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.826 [2024-07-14 21:20:35.356438] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:23.826 [2024-07-14 21:20:35.356459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:23.826 [2024-07-14 21:20:35.356479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.826 [2024-07-14 21:20:35.356500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.826 [2024-07-14 21:20:35.356520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:23.826 [2024-07-14 21:20:35.356546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:23.826 [2024-07-14 21:20:35.356567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:23.826 
[2024-07-14 21:20:35.356590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:23.826 [2024-07-14 21:20:35.356609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:23.826 [2024-07-14 21:20:35.356629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:23.826 [2024-07-14 21:20:35.356651] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:23.826 [2024-07-14 21:20:35.356677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.826 [2024-07-14 21:20:35.356702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:23.826 [2024-07-14 21:20:35.356737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:23.826 [2024-07-14 21:20:35.356787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:23.826 [2024-07-14 21:20:35.356818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:23.826 [2024-07-14 21:20:35.356848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:23.826 [2024-07-14 21:20:35.356871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:23.826 [2024-07-14 21:20:35.356950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:23.826 [2024-07-14 21:20:35.356990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:23.826 [2024-07-14 21:20:35.357012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:23.826 [2024-07-14 21:20:35.357032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:23.826 [2024-07-14 21:20:35.357052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:23.826 [2024-07-14 21:20:35.357073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:23.826 [2024-07-14 21:20:35.357094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:23.826 [2024-07-14 21:20:35.357114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:23.826 [2024-07-14 21:20:35.357135] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:23.826 [2024-07-14 21:20:35.357157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.826 [2024-07-14 21:20:35.357195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.826 [2024-07-14 21:20:35.357217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:23.826 [2024-07-14 21:20:35.357253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:23.826 [2024-07-14 21:20:35.357288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:23.826 [2024-07-14 21:20:35.357325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.826 [2024-07-14 21:20:35.357354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:23.826 [2024-07-14 21:20:35.357375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:21:23.826 [2024-07-14 21:20:35.357395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.086 [2024-07-14 21:20:35.397741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.086 [2024-07-14 21:20:35.397851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.086 [2024-07-14 21:20:35.397886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.252 ms 00:21:24.086 [2024-07-14 21:20:35.397905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.086 [2024-07-14 21:20:35.398046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.086 [2024-07-14 21:20:35.398075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.086 [2024-07-14 21:20:35.398146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:24.086 [2024-07-14 21:20:35.398167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.086 [2024-07-14 21:20:35.436847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.086 [2024-07-14 21:20:35.436943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.086 [2024-07-14 21:20:35.436975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.550 ms 00:21:24.086 [2024-07-14 21:20:35.436993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.086 [2024-07-14 21:20:35.437089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.086 [2024-07-14 21:20:35.437114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.086 [2024-07-14 21:20:35.437133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:24.086 [2024-07-14 21:20:35.437148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.086 [2024-07-14 21:20:35.437672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.086 [2024-07-14 21:20:35.437748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.086 [2024-07-14 21:20:35.437773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:21:24.086 [2024-07-14 21:20:35.437794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.086 [2024-07-14 21:20:35.438102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.086 [2024-07-14 21:20:35.438153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.086 [2024-07-14 21:20:35.438181] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:21:24.086 [2024-07-14 21:20:35.438219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.086 [2024-07-14 21:20:35.454277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.086 [2024-07-14 21:20:35.454342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.086 [2024-07-14 21:20:35.454369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.011 ms 00:21:24.086 [2024-07-14 21:20:35.454388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.086 [2024-07-14 21:20:35.470536] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:24.086 [2024-07-14 21:20:35.470602] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.086 [2024-07-14 21:20:35.470637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.086 [2024-07-14 21:20:35.470657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.086 [2024-07-14 21:20:35.470675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.023 ms 00:21:24.087 [2024-07-14 21:20:35.470692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.500347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.500441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.087 [2024-07-14 21:20:35.500472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.594 ms 00:21:24.087 [2024-07-14 21:20:35.500493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.516593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.516648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.087 [2024-07-14 21:20:35.516677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.010 ms 00:21:24.087 [2024-07-14 21:20:35.516703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.531969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.532014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.087 [2024-07-14 21:20:35.532040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.200 ms 00:21:24.087 [2024-07-14 21:20:35.532058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.532997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.533061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.087 [2024-07-14 21:20:35.533087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:21:24.087 [2024-07-14 21:20:35.533107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.604214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.604288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.087 [2024-07-14 21:20:35.604319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.066 ms 00:21:24.087 [2024-07-14 21:20:35.604338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.617001] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.087 [2024-07-14 21:20:35.619599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.619644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.087 [2024-07-14 21:20:35.619672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.143 ms 00:21:24.087 [2024-07-14 21:20:35.619691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.619870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.619902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.087 [2024-07-14 21:20:35.619941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:24.087 [2024-07-14 21:20:35.619959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.620086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.620149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.087 [2024-07-14 21:20:35.620180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:24.087 [2024-07-14 21:20:35.620201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.620279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.620315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.087 [2024-07-14 21:20:35.620340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:24.087 [2024-07-14 21:20:35.620359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.087 [2024-07-14 21:20:35.620451] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.087 [2024-07-14 21:20:35.620482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.087 [2024-07-14 21:20:35.620504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.087 [2024-07-14 21:20:35.620526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:24.087 [2024-07-14 21:20:35.620569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.346 [2024-07-14 21:20:35.651834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.346 [2024-07-14 21:20:35.651887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.346 [2024-07-14 21:20:35.651913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.218 ms 00:21:24.346 [2024-07-14 21:20:35.651932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.346 [2024-07-14 21:20:35.652027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.346 [2024-07-14 21:20:35.652056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.346 [2024-07-14 21:20:35.652089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:24.346 [2024-07-14 21:20:35.652108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:24.346 [2024-07-14 21:20:35.653524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.454 ms, result 0 00:22:05.408  Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-14 21:21:16.762974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.408 [2024-07-14 21:21:16.763218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.408 [2024-07-14 21:21:16.763375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:05.408 [2024-07-14 21:21:16.763429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.408 [2024-07-14 21:21:16.763568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.408 [2024-07-14 21:21:16.766934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.408 [2024-07-14 21:21:16.766972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.408 [2024-07-14 21:21:16.766989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:22:05.408 [2024-07-14 21:21:16.767000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.768560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.768607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.409 [2024-07-14 21:21:16.768634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:22:05.409 [2024-07-14 21:21:16.768647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.784169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.784212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.409 [2024-07-14 21:21:16.784245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.500 ms 00:22:05.409 [2024-07-14 21:21:16.784256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14
21:21:16.790214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.790254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:05.409 [2024-07-14 21:21:16.790293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.921 ms 00:22:05.409 [2024-07-14 21:21:16.790305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.817284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.817339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.409 [2024-07-14 21:21:16.817372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.899 ms 00:22:05.409 [2024-07-14 21:21:16.817382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.833372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.833412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:05.409 [2024-07-14 21:21:16.833444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.950 ms 00:22:05.409 [2024-07-14 21:21:16.833455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.833640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.833661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:05.409 [2024-07-14 21:21:16.833690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:22:05.409 [2024-07-14 21:21:16.833701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.862897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.862946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:05.409 [2024-07-14 21:21:16.862964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.168 ms 00:22:05.409 [2024-07-14 21:21:16.862975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.890860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.890909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:05.409 [2024-07-14 21:21:16.890942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.835 ms 00:22:05.409 [2024-07-14 21:21:16.890952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.917911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.917967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:05.409 [2024-07-14 21:21:16.917999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.917 ms 00:22:05.409 [2024-07-14 21:21:16.918023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.943158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.409 [2024-07-14 21:21:16.943196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:05.409 [2024-07-14 21:21:16.943227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.055 ms 00:22:05.409 [2024-07-14 21:21:16.943237] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.409 [2024-07-14 21:21:16.943275] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:05.409 [2024-07-14 21:21:16.943296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:05.409 [2024-07-14 21:21:16.943858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.943990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944375] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 
21:21:16.944784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:05.410 [2024-07-14 21:21:16.944814] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:05.410 [2024-07-14 21:21:16.944849] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 183c2ef8-c7fe-444a-9b23-fb17f92b76cf 00:22:05.410 [2024-07-14 21:21:16.944873] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:05.410 [2024-07-14 21:21:16.944892] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:05.410 [2024-07-14 21:21:16.944909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:05.410 [2024-07-14 21:21:16.944930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:05.410 [2024-07-14 21:21:16.944941] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:05.410 [2024-07-14 21:21:16.944952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:05.410 [2024-07-14 21:21:16.944962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:05.411 [2024-07-14 21:21:16.944971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:05.411 [2024-07-14 21:21:16.944980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:05.411 [2024-07-14 21:21:16.944992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.411 [2024-07-14 21:21:16.945003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:05.411 [2024-07-14 21:21:16.945014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:22:05.411 [2024-07-14 21:21:16.945024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:16.960158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.670 [2024-07-14 21:21:16.960202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:05.670 [2024-07-14 21:21:16.960234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.085 ms 00:22:05.670 [2024-07-14 21:21:16.960256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:16.960685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.670 [2024-07-14 21:21:16.960732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:05.670 [2024-07-14 21:21:16.960759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:22:05.670 [2024-07-14 21:21:16.960769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:16.991149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:16.991210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.670 [2024-07-14 21:21:16.991243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:16.991253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:16.991326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:16.991340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:22:05.670 [2024-07-14 21:21:16.991350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:16.991360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:16.991443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:16.991466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.670 [2024-07-14 21:21:16.991476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:16.991486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:16.991505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:16.991518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.670 [2024-07-14 21:21:16.991528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:16.991537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.082012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:17.082090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.670 [2024-07-14 21:21:17.082110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:17.082122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.158519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:17.158587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.670 [2024-07-14 21:21:17.158605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:17.158615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.158689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:17.158706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.670 [2024-07-14 21:21:17.158717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:17.158734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.158775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:17.158788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.670 [2024-07-14 21:21:17.158835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:17.158849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.158964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:17.158983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.670 [2024-07-14 21:21:17.158995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:17.159011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.159056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:17.159072] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:05.670 [2024-07-14 21:21:17.159084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:17.159094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.159139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:17.159194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.670 [2024-07-14 21:21:17.159213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:17.159232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.159290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.670 [2024-07-14 21:21:17.159307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.670 [2024-07-14 21:21:17.159318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.670 [2024-07-14 21:21:17.159333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.670 [2024-07-14 21:21:17.159494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.482 ms, result 0 00:22:07.059 00:22:07.059 00:22:07.059 21:21:18 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:07.318 [2024-07-14 21:21:18.671166] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:07.318 [2024-07-14 21:21:18.671350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81521 ] 00:22:07.318 [2024-07-14 21:21:18.841382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.576 [2024-07-14 21:21:19.008270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.835 [2024-07-14 21:21:19.298635] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.835 [2024-07-14 21:21:19.298705] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.095 [2024-07-14 21:21:19.457989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.458062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:08.095 [2024-07-14 21:21:19.458083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:08.095 [2024-07-14 21:21:19.458096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.458186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.458207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.095 [2024-07-14 21:21:19.458220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:08.095 [2024-07-14 21:21:19.458235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.458266] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:08.095 [2024-07-14 21:21:19.459266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:08.095 [2024-07-14 21:21:19.459314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.459334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.095 [2024-07-14 21:21:19.459348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:22:08.095 [2024-07-14 21:21:19.459359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.460638] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:08.095 [2024-07-14 21:21:19.477490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.477536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:08.095 [2024-07-14 21:21:19.477555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.854 ms 00:22:08.095 [2024-07-14 21:21:19.477566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.477641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.477659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:08.095 [2024-07-14 21:21:19.477675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:08.095 [2024-07-14 21:21:19.477686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.482470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.482519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.095 [2024-07-14 21:21:19.482535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.697 ms 00:22:08.095 [2024-07-14 21:21:19.482546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.482641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.482662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.095 [2024-07-14 21:21:19.482674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:08.095 [2024-07-14 21:21:19.482685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.482747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.482764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:08.095 [2024-07-14 21:21:19.482793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:08.095 [2024-07-14 21:21:19.482820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.482899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:08.095 [2024-07-14 21:21:19.487276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.487314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.095 [2024-07-14 21:21:19.487329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.387 ms 00:22:08.095 [2024-07-14 
21:21:19.487339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.487390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.487407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:08.095 [2024-07-14 21:21:19.487419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:08.095 [2024-07-14 21:21:19.487429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.487471] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:08.095 [2024-07-14 21:21:19.487500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:08.095 [2024-07-14 21:21:19.487539] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:08.095 [2024-07-14 21:21:19.487561] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:08.095 [2024-07-14 21:21:19.487656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:08.095 [2024-07-14 21:21:19.487671] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:08.095 [2024-07-14 21:21:19.487685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:08.095 [2024-07-14 21:21:19.487699] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:08.095 [2024-07-14 21:21:19.487711] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:08.095 [2024-07-14 21:21:19.487723] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:08.095 [2024-07-14 21:21:19.487734] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:08.095 [2024-07-14 21:21:19.487744] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:08.095 [2024-07-14 21:21:19.487754] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:08.095 [2024-07-14 21:21:19.487766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.487799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:08.095 [2024-07-14 21:21:19.487830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:22:08.095 [2024-07-14 21:21:19.487862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.487947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.095 [2024-07-14 21:21:19.487962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:08.095 [2024-07-14 21:21:19.487974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:08.095 [2024-07-14 21:21:19.487985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.095 [2024-07-14 21:21:19.488119] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:08.095 [2024-07-14 21:21:19.488176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:08.095 [2024-07-14 21:21:19.488204] ftl_layout.c: 119:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:22:08.095 [2024-07-14 21:21:19.488240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.095 [2024-07-14 21:21:19.488261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:08.095 [2024-07-14 21:21:19.488279] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:08.095 [2024-07-14 21:21:19.488300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:08.095 [2024-07-14 21:21:19.488312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:08.095 [2024-07-14 21:21:19.488324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:08.095 [2024-07-14 21:21:19.488335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.096 [2024-07-14 21:21:19.488346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:08.096 [2024-07-14 21:21:19.488357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:08.096 [2024-07-14 21:21:19.488368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.096 [2024-07-14 21:21:19.488379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:08.096 [2024-07-14 21:21:19.488390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:08.096 [2024-07-14 21:21:19.488437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:08.096 [2024-07-14 21:21:19.488471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:08.096 [2024-07-14 21:21:19.488490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:08.096 [2024-07-14 21:21:19.488546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.096 [2024-07-14 21:21:19.488586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:08.096 [2024-07-14 21:21:19.488600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.096 [2024-07-14 21:21:19.488646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:08.096 [2024-07-14 21:21:19.488657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.096 [2024-07-14 21:21:19.488682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:08.096 [2024-07-14 21:21:19.488693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.096 [2024-07-14 21:21:19.488726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:08.096 [2024-07-14 21:21:19.488737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488748] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.096 [2024-07-14 21:21:19.488759] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:08.096 [2024-07-14 21:21:19.488770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:08.096 [2024-07-14 21:21:19.488792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.096 [2024-07-14 21:21:19.488803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:08.096 [2024-07-14 21:21:19.488814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:08.096 [2024-07-14 21:21:19.488825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:08.096 [2024-07-14 21:21:19.488866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:08.096 [2024-07-14 21:21:19.488878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488897] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:08.096 [2024-07-14 21:21:19.488919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:08.096 [2024-07-14 21:21:19.488941] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.096 [2024-07-14 21:21:19.488957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.096 [2024-07-14 21:21:19.488969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:08.096 [2024-07-14 21:21:19.488982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:08.096 [2024-07-14 21:21:19.488993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:08.096 [2024-07-14 21:21:19.489004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:08.096 [2024-07-14 21:21:19.489015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:08.096 [2024-07-14 21:21:19.489027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:08.096 [2024-07-14 21:21:19.489040] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:08.096 [2024-07-14 21:21:19.489056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.096 [2024-07-14 21:21:19.489070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:08.096 [2024-07-14 21:21:19.489083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:08.096 [2024-07-14 21:21:19.489104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:08.096 [2024-07-14 21:21:19.489141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:08.096 [2024-07-14 21:21:19.489161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:08.096 [2024-07-14 21:21:19.489175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:08.096 [2024-07-14 21:21:19.489187] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:08.096 [2024-07-14 21:21:19.489198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:08.096 [2024-07-14 21:21:19.489210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:08.096 [2024-07-14 21:21:19.489222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:08.096 [2024-07-14 21:21:19.489234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:08.096 [2024-07-14 21:21:19.489246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:08.096 [2024-07-14 21:21:19.489257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:08.096 [2024-07-14 21:21:19.489269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:08.096 [2024-07-14 21:21:19.489300] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:08.096 [2024-07-14 21:21:19.489323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.096 [2024-07-14 21:21:19.489347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:08.096 [2024-07-14 21:21:19.489368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:08.096 [2024-07-14 21:21:19.489381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:08.096 [2024-07-14 21:21:19.489394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:08.096 [2024-07-14 21:21:19.489408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.489427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:08.096 [2024-07-14 21:21:19.489448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:22:08.096 [2024-07-14 21:21:19.489470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.527836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.527900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.096 [2024-07-14 21:21:19.527932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.288 ms 00:22:08.096 [2024-07-14 21:21:19.527945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.528061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.528077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:08.096 [2024-07-14 21:21:19.528090] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:08.096 [2024-07-14 21:21:19.528100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.564139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.564230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.096 [2024-07-14 21:21:19.564249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.897 ms 00:22:08.096 [2024-07-14 21:21:19.564261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.564330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.564346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.096 [2024-07-14 21:21:19.564360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:08.096 [2024-07-14 21:21:19.564371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.564832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.564867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.096 [2024-07-14 21:21:19.564899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:22:08.096 [2024-07-14 21:21:19.564911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.565117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.565150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.096 [2024-07-14 21:21:19.565164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:22:08.096 [2024-07-14 21:21:19.565176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.580602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.580654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.096 [2024-07-14 21:21:19.580673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.394 ms 00:22:08.096 [2024-07-14 21:21:19.580686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.597619] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:08.096 [2024-07-14 21:21:19.597686] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:08.096 [2024-07-14 21:21:19.597707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.096 [2024-07-14 21:21:19.597719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:08.096 [2024-07-14 21:21:19.597734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.795 ms 00:22:08.096 [2024-07-14 21:21:19.597744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.096 [2024-07-14 21:21:19.626367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.097 [2024-07-14 21:21:19.626436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:08.097 [2024-07-14 21:21:19.626473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.488 ms 
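A quick consistency check on the NV cache layout dump above: every region starts where the previous one ends, and the hex superblock view agrees with the MiB view if one assumes a 4 KiB FTL block (an inference from the numbers; the log never states the block size). For example, the l2p region of 0x5000 blocks * 4 KiB = 80.00 MiB puts band_md at 0.12 + 80.00 = 80.12 MiB and its mirror at 80.62 MiB; p2l0..p2l3 add 4 * 8.00 MiB, reaching trim_md at 113.12 MiB; and on the base device, data_btm at 0.25 MiB plus 102400.00 MiB puts vmap at 102400.25 MiB, exactly as dumped. The sketch below automates the same walk; the offsets are transcribed by hand from the dump (it is not part of the test suite), and the 0.015 MiB tolerance absorbs the dump's two-decimal rounding of 4 KiB-granular offsets:

    # Contiguity check over the [FTL][ftl0] NV cache regions dumped above.
    # (offset MiB, size MiB) pairs transcribed from the dump_region lines.
    regions = {
        "sb": (0.00, 0.12), "l2p": (0.12, 80.00),
        "band_md": (80.12, 0.50), "band_md_mirror": (80.62, 0.50),
        "p2l0": (81.12, 8.00), "p2l1": (89.12, 8.00),
        "p2l2": (97.12, 8.00), "p2l3": (105.12, 8.00),
        "trim_md": (113.12, 0.25), "trim_md_mirror": (113.38, 0.25),
        "trim_log": (113.62, 0.12), "trim_log_mirror": (113.75, 0.12),
        "nvc_md": (113.88, 0.12), "nvc_md_mirror": (114.00, 0.12),
    }
    ordered = sorted(regions.items(), key=lambda kv: kv[1][0])
    for (name, (off, size)), (nxt, (nxt_off, _)) in zip(ordered, ordered[1:]):
        # 0.015 MiB tolerance: the dump rounds to two decimals (0.125 -> 0.12).
        assert abs((off + size) - nxt_off) <= 0.015, (name, nxt)
    print("regions contiguous; layout ends at %.2f MiB" % sum(ordered[-1][1]))

The same arithmetic holds for the "SB metadata layout - nvc" lines in block units, e.g. blk_offs 0x20 + blk_sz 0x5000 = 0x5020, which is exactly the next region's offset.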
00:22:08.097 [2024-07-14 21:21:19.626494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.643011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.643065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:08.356 [2024-07-14 21:21:19.643084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.480 ms 00:22:08.356 [2024-07-14 21:21:19.643097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.659142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.659219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:08.356 [2024-07-14 21:21:19.659237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.989 ms 00:22:08.356 [2024-07-14 21:21:19.659247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.660113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.660152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:08.356 [2024-07-14 21:21:19.660169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:22:08.356 [2024-07-14 21:21:19.660196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.729312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.729403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:08.356 [2024-07-14 21:21:19.729425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.058 ms 00:22:08.356 [2024-07-14 21:21:19.729437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.742370] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:08.356 [2024-07-14 21:21:19.745275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.745330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:08.356 [2024-07-14 21:21:19.745365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.750 ms 00:22:08.356 [2024-07-14 21:21:19.745377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.745494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.745514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:08.356 [2024-07-14 21:21:19.745526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:08.356 [2024-07-14 21:21:19.745538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.745620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.745643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:08.356 [2024-07-14 21:21:19.745656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:08.356 [2024-07-14 21:21:19.745667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.745720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 
21:21:19.745736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:08.356 [2024-07-14 21:21:19.745748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:08.356 [2024-07-14 21:21:19.745759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.745851] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:08.356 [2024-07-14 21:21:19.745882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.745903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:08.356 [2024-07-14 21:21:19.745930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:08.356 [2024-07-14 21:21:19.745942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.778469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.778550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:08.356 [2024-07-14 21:21:19.778572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.492 ms 00:22:08.356 [2024-07-14 21:21:19.778586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.778714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.356 [2024-07-14 21:21:19.778746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:08.356 [2024-07-14 21:21:19.778761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:08.356 [2024-07-14 21:21:19.778773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.356 [2024-07-14 21:21:19.780054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.532 ms, result 0 00:22:50.267  Copying: 25/1024 [MB] (25 MBps) [... 39 intermediate progress ticks, 23-26 MBps per 23-26 MB step, elided ...] Copying: 1019/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-14 21:22:01.549035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.267 [2024-07-14 21:22:01.549113] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:50.267 [2024-07-14 21:22:01.549167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:50.267 [2024-07-14 21:22:01.549179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.267 [2024-07-14 21:22:01.549224] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:50.267 [2024-07-14 21:22:01.553732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.267 [2024-07-14 21:22:01.553962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:50.267 [2024-07-14 21:22:01.554097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.469 ms 00:22:50.267 [2024-07-14 21:22:01.554212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.267 [2024-07-14 21:22:01.554715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.267 [2024-07-14 21:22:01.554902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:50.267 [2024-07-14 21:22:01.554929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:22:50.267 [2024-07-14 21:22:01.554942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.267 [2024-07-14 21:22:01.558752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.267 [2024-07-14 21:22:01.558956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:50.267 [2024-07-14 21:22:01.559080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.783 ms 00:22:50.267 [2024-07-14 21:22:01.559255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.267 [2024-07-14 21:22:01.566020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.267 [2024-07-14 21:22:01.566220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:50.267 [2024-07-14 21:22:01.566363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.693 ms 00:22:50.267 [2024-07-14 21:22:01.566507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.267 [2024-07-14 21:22:01.597081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.267 [2024-07-14 21:22:01.597344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:50.267 [2024-07-14 21:22:01.597482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.441 ms 00:22:50.268 [2024-07-14 21:22:01.597530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.268 [2024-07-14 21:22:01.613864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.268 [2024-07-14 21:22:01.614042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:50.268 [2024-07-14 21:22:01.614187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.169 ms 00:22:50.268 [2024-07-14 21:22:01.614309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.268 [2024-07-14 21:22:01.614476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.268 [2024-07-14 21:22:01.614545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:50.268 [2024-07-14 21:22:01.614653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:50.268 [2024-07-14 21:22:01.614772] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.268 [2024-07-14 21:22:01.641386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.268 [2024-07-14 21:22:01.641597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:50.268 [2024-07-14 21:22:01.641733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.484 ms 00:22:50.268 [2024-07-14 21:22:01.641782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.268 [2024-07-14 21:22:01.667952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.268 [2024-07-14 21:22:01.668135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:50.268 [2024-07-14 21:22:01.668263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.994 ms 00:22:50.268 [2024-07-14 21:22:01.668429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.268 [2024-07-14 21:22:01.694234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.268 [2024-07-14 21:22:01.694420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:50.268 [2024-07-14 21:22:01.694561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.716 ms 00:22:50.268 [2024-07-14 21:22:01.694609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.268 [2024-07-14 21:22:01.720532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.268 [2024-07-14 21:22:01.720570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:50.268 [2024-07-14 21:22:01.720602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.695 ms 00:22:50.268 [2024-07-14 21:22:01.720613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.268 [2024-07-14 21:22:01.720653] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:50.268 [2024-07-14 21:22:01.720676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 
[2024-07-14 21:22:01.720865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.720999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 
state: free 00:22:50.268 [2024-07-14 21:22:01.721247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 
0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:50.268 [2024-07-14 21:22:01.721692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.721989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.722002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.722015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.722027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.722038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:50.269 [2024-07-14 21:22:01.722064] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:50.269 [2024-07-14 21:22:01.722076] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 183c2ef8-c7fe-444a-9b23-fb17f92b76cf 00:22:50.269 [2024-07-14 21:22:01.722088] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:50.269 [2024-07-14 21:22:01.722099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:50.269 [2024-07-14 21:22:01.722117] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:50.269 [2024-07-14 21:22:01.722129] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:50.269 [2024-07-14 21:22:01.722140] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:50.269 [2024-07-14 21:22:01.722153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:50.269 [2024-07-14 21:22:01.722164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:50.269 [2024-07-14 21:22:01.722174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:50.269 [2024-07-14 21:22:01.722185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:50.269 [2024-07-14 21:22:01.722196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.269 [2024-07-14 21:22:01.722209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:22:50.269 [2024-07-14 21:22:01.722221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:22:50.269 [2024-07-14 21:22:01.722232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.269 [2024-07-14 21:22:01.736687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.269 [2024-07-14 21:22:01.736725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:50.269 [2024-07-14 21:22:01.736752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.409 ms 00:22:50.269 [2024-07-14 21:22:01.736763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.269 [2024-07-14 21:22:01.737284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.269 [2024-07-14 21:22:01.737315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:50.269 [2024-07-14 21:22:01.737330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:22:50.269 [2024-07-14 21:22:01.737342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.269 [2024-07-14 21:22:01.768491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.269 [2024-07-14 21:22:01.768538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.269 [2024-07-14 21:22:01.768555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.269 [2024-07-14 21:22:01.768566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.269 [2024-07-14 21:22:01.768629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.269 [2024-07-14 21:22:01.768644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.269 [2024-07-14 21:22:01.768656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.269 [2024-07-14 21:22:01.768666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.269 [2024-07-14 21:22:01.768774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.269 [2024-07-14 21:22:01.768792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.269 [2024-07-14 21:22:01.768804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.269 [2024-07-14 21:22:01.768814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.269 [2024-07-14 21:22:01.768849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.269 [2024-07-14 21:22:01.768863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.269 [2024-07-14 21:22:01.768875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.269 [2024-07-14 21:22:01.768885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.859310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.529 [2024-07-14 21:22:01.859366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.529 [2024-07-14 21:22:01.859399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.529 [2024-07-14 21:22:01.859410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.932447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.529 [2024-07-14 21:22:01.932505] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.529 [2024-07-14 21:22:01.932539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.529 [2024-07-14 21:22:01.932550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.932626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.529 [2024-07-14 21:22:01.932642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.529 [2024-07-14 21:22:01.932661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.529 [2024-07-14 21:22:01.932671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.932714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.529 [2024-07-14 21:22:01.932728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.529 [2024-07-14 21:22:01.932740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.529 [2024-07-14 21:22:01.932750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.932908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.529 [2024-07-14 21:22:01.932943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.529 [2024-07-14 21:22:01.932963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.529 [2024-07-14 21:22:01.932974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.933021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.529 [2024-07-14 21:22:01.933039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:50.529 [2024-07-14 21:22:01.933051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.529 [2024-07-14 21:22:01.933061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.933105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.529 [2024-07-14 21:22:01.933119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.529 [2024-07-14 21:22:01.933146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.529 [2024-07-14 21:22:01.933162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.933211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.529 [2024-07-14 21:22:01.933242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.529 [2024-07-14 21:22:01.933269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.529 [2024-07-14 21:22:01.933296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.529 [2024-07-14 21:22:01.933448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.401 ms, result 0 00:22:51.469 00:22:51.469 00:22:51.469 21:22:02 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:53.994 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:53.994 21:22:04 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:53.994 [2024-07-14 21:22:05.027606] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:53.994 [2024-07-14 21:22:05.027756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81986 ] 00:22:53.994 [2024-07-14 21:22:05.191364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.994 [2024-07-14 21:22:05.395646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.252 [2024-07-14 21:22:05.685894] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:54.252 [2024-07-14 21:22:05.685977] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:54.512 [2024-07-14 21:22:05.845112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.845181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:54.512 [2024-07-14 21:22:05.845200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:54.512 [2024-07-14 21:22:05.845211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.845278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.845297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.512 [2024-07-14 21:22:05.845308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:54.512 [2024-07-14 21:22:05.845321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.845349] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:54.512 [2024-07-14 21:22:05.846269] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:54.512 [2024-07-14 21:22:05.846313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.846332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.512 [2024-07-14 21:22:05.846345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:22:54.512 [2024-07-14 21:22:05.846356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.847595] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:54.512 [2024-07-14 21:22:05.862695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.862734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:54.512 [2024-07-14 21:22:05.862782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.102 ms 00:22:54.512 [2024-07-14 21:22:05.862792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.862909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.862929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:54.512 [2024-07-14 21:22:05.862946] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:54.512 [2024-07-14 21:22:05.862957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.867547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.867590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.512 [2024-07-14 21:22:05.867622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:22:54.512 [2024-07-14 21:22:05.867633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.867722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.867744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.512 [2024-07-14 21:22:05.867755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:54.512 [2024-07-14 21:22:05.867781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.867920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.867940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:54.512 [2024-07-14 21:22:05.867969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:54.512 [2024-07-14 21:22:05.867980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.868016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:54.512 [2024-07-14 21:22:05.872273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.872312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.512 [2024-07-14 21:22:05.872344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms 00:22:54.512 [2024-07-14 21:22:05.872355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.872462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.872480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:54.512 [2024-07-14 21:22:05.872493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:54.512 [2024-07-14 21:22:05.872519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.872561] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:54.512 [2024-07-14 21:22:05.872590] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:54.512 [2024-07-14 21:22:05.872632] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:54.512 [2024-07-14 21:22:05.872655] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:54.512 [2024-07-14 21:22:05.872784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:54.512 [2024-07-14 21:22:05.872797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:54.512 [2024-07-14 21:22:05.872824] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:54.512 [2024-07-14 21:22:05.872837] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:54.512 [2024-07-14 21:22:05.872848] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:54.512 [2024-07-14 21:22:05.872859] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:54.512 [2024-07-14 21:22:05.872869] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:54.512 [2024-07-14 21:22:05.872878] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:54.512 [2024-07-14 21:22:05.872929] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:54.512 [2024-07-14 21:22:05.872942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.872957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:54.512 [2024-07-14 21:22:05.872968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:22:54.512 [2024-07-14 21:22:05.872978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.873083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-07-14 21:22:05.873097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:54.512 [2024-07-14 21:22:05.873109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:54.512 [2024-07-14 21:22:05.873118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-07-14 21:22:05.873253] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:54.512 [2024-07-14 21:22:05.873302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:54.512 [2024-07-14 21:22:05.873320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.512 [2024-07-14 21:22:05.873346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:54.512 [2024-07-14 21:22:05.873367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:54.512 [2024-07-14 21:22:05.873388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:54.512 [2024-07-14 21:22:05.873398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.512 [2024-07-14 21:22:05.873418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:54.512 [2024-07-14 21:22:05.873428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:54.512 [2024-07-14 21:22:05.873438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.512 [2024-07-14 21:22:05.873449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:54.512 [2024-07-14 21:22:05.873459] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:54.512 [2024-07-14 21:22:05.873469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873480] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:54.512 [2024-07-14 21:22:05.873490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:54.512 [2024-07-14 21:22:05.873500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:54.512 [2024-07-14 21:22:05.873532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.512 [2024-07-14 21:22:05.873553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:54.512 [2024-07-14 21:22:05.873563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.512 [2024-07-14 21:22:05.873582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:54.512 [2024-07-14 21:22:05.873592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.512 [2024-07-14 21:22:05.873612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:54.512 [2024-07-14 21:22:05.873622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:54.512 [2024-07-14 21:22:05.873632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.513 [2024-07-14 21:22:05.873641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:54.513 [2024-07-14 21:22:05.873651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:54.513 [2024-07-14 21:22:05.873661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.513 [2024-07-14 21:22:05.873671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:54.513 [2024-07-14 21:22:05.873681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:54.513 [2024-07-14 21:22:05.873691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.513 [2024-07-14 21:22:05.873701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:54.513 [2024-07-14 21:22:05.873712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:54.513 [2024-07-14 21:22:05.873722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.513 [2024-07-14 21:22:05.873732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:54.513 [2024-07-14 21:22:05.873741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:54.513 [2024-07-14 21:22:05.873751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.513 [2024-07-14 21:22:05.873761] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:54.513 [2024-07-14 21:22:05.873772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:54.513 [2024-07-14 21:22:05.873783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.513 [2024-07-14 21:22:05.873793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.513 [2024-07-14 21:22:05.873804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:54.513 
[2024-07-14 21:22:05.873814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:54.513 [2024-07-14 21:22:05.873824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:54.513 [2024-07-14 21:22:05.873834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:54.513 [2024-07-14 21:22:05.873844] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:54.513 [2024-07-14 21:22:05.873854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:54.513 [2024-07-14 21:22:05.873865] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:54.513 [2024-07-14 21:22:05.873879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.513 [2024-07-14 21:22:05.873909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:54.513 [2024-07-14 21:22:05.873921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:54.513 [2024-07-14 21:22:05.873932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:54.513 [2024-07-14 21:22:05.873943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:54.513 [2024-07-14 21:22:05.873954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:54.513 [2024-07-14 21:22:05.873964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:54.513 [2024-07-14 21:22:05.873975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:54.513 [2024-07-14 21:22:05.873986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:54.513 [2024-07-14 21:22:05.873997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:54.513 [2024-07-14 21:22:05.874008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:54.513 [2024-07-14 21:22:05.874019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:54.513 [2024-07-14 21:22:05.874030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:54.513 [2024-07-14 21:22:05.874041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:54.513 [2024-07-14 21:22:05.874052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:54.513 [2024-07-14 21:22:05.874063] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:54.513 [2024-07-14 21:22:05.874074] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.513 [2024-07-14 21:22:05.874086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:54.513 [2024-07-14 21:22:05.874097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:54.513 [2024-07-14 21:22:05.874108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:54.513 [2024-07-14 21:22:05.874120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:54.513 [2024-07-14 21:22:05.874132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.874149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:54.513 [2024-07-14 21:22:05.874161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:22:54.513 [2024-07-14 21:22:05.874173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:05.915766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.915880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.513 [2024-07-14 21:22:05.915919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.533 ms 00:22:54.513 [2024-07-14 21:22:05.915931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:05.916047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.916063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:54.513 [2024-07-14 21:22:05.916075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:54.513 [2024-07-14 21:22:05.916085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:05.949718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.949767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.513 [2024-07-14 21:22:05.949801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.530 ms 00:22:54.513 [2024-07-14 21:22:05.949840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:05.949923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.949940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.513 [2024-07-14 21:22:05.949952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:54.513 [2024-07-14 21:22:05.949962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:05.950406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.950446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.513 [2024-07-14 21:22:05.950459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:22:54.513 [2024-07-14 21:22:05.950469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 
21:22:05.950614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.950633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.513 [2024-07-14 21:22:05.950644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:54.513 [2024-07-14 21:22:05.950654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:05.965352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.965392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.513 [2024-07-14 21:22:05.965424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.674 ms 00:22:54.513 [2024-07-14 21:22:05.965434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:05.980849] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:54.513 [2024-07-14 21:22:05.981134] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:54.513 [2024-07-14 21:22:05.981181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:05.981195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:54.513 [2024-07-14 21:22:05.981208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.630 ms 00:22:54.513 [2024-07-14 21:22:05.981218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:06.007853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:06.007908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:54.513 [2024-07-14 21:22:06.007942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.587 ms 00:22:54.513 [2024-07-14 21:22:06.007960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:06.022008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:06.022047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:54.513 [2024-07-14 21:22:06.022080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.996 ms 00:22:54.513 [2024-07-14 21:22:06.022090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:06.035862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:06.035898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:54.513 [2024-07-14 21:22:06.035929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.730 ms 00:22:54.513 [2024-07-14 21:22:06.035939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-07-14 21:22:06.036689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-07-14 21:22:06.036711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:54.513 [2024-07-14 21:22:06.036739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:22:54.513 [2024-07-14 21:22:06.036764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.771 [2024-07-14 21:22:06.102422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:54.771 [2024-07-14 21:22:06.102491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:54.771 [2024-07-14 21:22:06.102527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.635 ms 00:22:54.771 [2024-07-14 21:22:06.102538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.771 [2024-07-14 21:22:06.114019] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:54.771 [2024-07-14 21:22:06.116350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.771 [2024-07-14 21:22:06.116384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:54.771 [2024-07-14 21:22:06.116442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.730 ms 00:22:54.771 [2024-07-14 21:22:06.116454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.771 [2024-07-14 21:22:06.116554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.771 [2024-07-14 21:22:06.116573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:54.771 [2024-07-14 21:22:06.116587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:54.771 [2024-07-14 21:22:06.116598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.771 [2024-07-14 21:22:06.116685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.771 [2024-07-14 21:22:06.116709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:54.771 [2024-07-14 21:22:06.116722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:54.771 [2024-07-14 21:22:06.116748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.771 [2024-07-14 21:22:06.116778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.771 [2024-07-14 21:22:06.116790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:54.771 [2024-07-14 21:22:06.116802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:54.771 [2024-07-14 21:22:06.116826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.771 [2024-07-14 21:22:06.116881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:54.771 [2024-07-14 21:22:06.116915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.771 [2024-07-14 21:22:06.116926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:54.771 [2024-07-14 21:22:06.116941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:54.771 [2024-07-14 21:22:06.116951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.772 [2024-07-14 21:22:06.146341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.772 [2024-07-14 21:22:06.146386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:54.772 [2024-07-14 21:22:06.146420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.364 ms 00:22:54.772 [2024-07-14 21:22:06.146431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.772 [2024-07-14 21:22:06.146508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.772 [2024-07-14 21:22:06.146535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:54.772 [2024-07-14 21:22:06.146547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:54.772 [2024-07-14 21:22:06.146557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.772 [2024-07-14 21:22:06.147952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.241 ms, result 0 00:23:38.314  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (24 MBps) Copying: 95/1024 [MB] (24 MBps) Copying: 119/1024 [MB] (23 MBps) Copying: 143/1024 [MB] (24 MBps) Copying: 168/1024 [MB] (24 MBps) Copying: 192/1024 [MB] (24 MBps) Copying: 216/1024 [MB] (24 MBps) Copying: 240/1024 [MB] (24 MBps) Copying: 265/1024 [MB] (24 MBps) Copying: 288/1024 [MB] (23 MBps) Copying: 312/1024 [MB] (24 MBps) Copying: 337/1024 [MB] (24 MBps) Copying: 361/1024 [MB] (24 MBps) Copying: 385/1024 [MB] (23 MBps) Copying: 410/1024 [MB] (24 MBps) Copying: 434/1024 [MB] (23 MBps) Copying: 458/1024 [MB] (24 MBps) Copying: 483/1024 [MB] (24 MBps) Copying: 507/1024 [MB] (24 MBps) Copying: 531/1024 [MB] (23 MBps) Copying: 555/1024 [MB] (24 MBps) Copying: 579/1024 [MB] (24 MBps) Copying: 603/1024 [MB] (24 MBps) Copying: 627/1024 [MB] (24 MBps) Copying: 652/1024 [MB] (24 MBps) Copying: 675/1024 [MB] (23 MBps) Copying: 700/1024 [MB] (24 MBps) Copying: 724/1024 [MB] (24 MBps) Copying: 748/1024 [MB] (23 MBps) Copying: 772/1024 [MB] (24 MBps) Copying: 796/1024 [MB] (23 MBps) Copying: 820/1024 [MB] (24 MBps) Copying: 844/1024 [MB] (24 MBps) Copying: 869/1024 [MB] (24 MBps) Copying: 893/1024 [MB] (23 MBps) Copying: 917/1024 [MB] (23 MBps) Copying: 940/1024 [MB] (23 MBps) Copying: 964/1024 [MB] (24 MBps) Copying: 987/1024 [MB] (23 MBps) Copying: 1011/1024 [MB] (23 MBps) Copying: 1023/1024 [MB] (12 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-14 21:22:49.791448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.314 [2024-07-14 21:22:49.791550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:38.314 [2024-07-14 21:22:49.791574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.314 [2024-07-14 21:22:49.791587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.314 [2024-07-14 21:22:49.794968] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:38.314 [2024-07-14 21:22:49.799371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.314 [2024-07-14 21:22:49.799411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:38.314 [2024-07-14 21:22:49.799443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.334 ms 00:23:38.314 [2024-07-14 21:22:49.799453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.314 [2024-07-14 21:22:49.812823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.314 [2024-07-14 21:22:49.812879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:38.314 [2024-07-14 21:22:49.812914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.961 ms 00:23:38.314 [2024-07-14 21:22:49.812926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.314 [2024-07-14 21:22:49.834914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.314 [2024-07-14 21:22:49.834957] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:38.314 [2024-07-14 21:22:49.834996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.965 ms 00:23:38.314 [2024-07-14 21:22:49.835013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.314 [2024-07-14 21:22:49.841325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.314 [2024-07-14 21:22:49.841354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:38.314 [2024-07-14 21:22:49.841384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.274 ms 00:23:38.314 [2024-07-14 21:22:49.841394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.573 [2024-07-14 21:22:49.870768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.573 [2024-07-14 21:22:49.870854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:38.573 [2024-07-14 21:22:49.870872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.314 ms 00:23:38.573 [2024-07-14 21:22:49.870883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.573 [2024-07-14 21:22:49.886474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.573 [2024-07-14 21:22:49.886512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:38.573 [2024-07-14 21:22:49.886543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.532 ms 00:23:38.573 [2024-07-14 21:22:49.886559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.573 [2024-07-14 21:22:49.990804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.573 [2024-07-14 21:22:49.990889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:38.573 [2024-07-14 21:22:49.990907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.201 ms 00:23:38.573 [2024-07-14 21:22:49.990918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.573 [2024-07-14 21:22:50.021518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.573 [2024-07-14 21:22:50.021565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:38.573 [2024-07-14 21:22:50.021598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.578 ms 00:23:38.573 [2024-07-14 21:22:50.021609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.573 [2024-07-14 21:22:50.050462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.573 [2024-07-14 21:22:50.050510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:38.573 [2024-07-14 21:22:50.050542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.808 ms 00:23:38.573 [2024-07-14 21:22:50.050553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.573 [2024-07-14 21:22:50.080523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.573 [2024-07-14 21:22:50.080569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:38.573 [2024-07-14 21:22:50.080602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.897 ms 00:23:38.573 [2024-07-14 21:22:50.080614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.573 [2024-07-14 21:22:50.108671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:23:38.573 [2024-07-14 21:22:50.108712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:38.573 [2024-07-14 21:22:50.108759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.963 ms 00:23:38.573 [2024-07-14 21:22:50.108769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.573 [2024-07-14 21:22:50.108859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:38.573 [2024-07-14 21:22:50.108896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117504 / 261120 wr_cnt: 1 state: open 00:23:38.573 [2024-07-14 21:22:50.108909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:38.573 [2024-07-14 21:22:50.108920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:38.573 [2024-07-14 21:22:50.108931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:38.573 [2024-07-14 21:22:50.108942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:38.573 [2024-07-14 21:22:50.108952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:38.573 [2024-07-14 21:22:50.108962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.108972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.108983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.108994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109149] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109455] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 
21:22:50.109737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.109999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.110010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.110022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:23:38.574 [2024-07-14 21:22:50.110033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.110045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.110056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.110068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:38.574 [2024-07-14 21:22:50.110088] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:38.574 [2024-07-14 21:22:50.110099] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 183c2ef8-c7fe-444a-9b23-fb17f92b76cf 00:23:38.574 [2024-07-14 21:22:50.110111] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117504 00:23:38.574 [2024-07-14 21:22:50.110121] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118464 00:23:38.575 [2024-07-14 21:22:50.110131] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117504 00:23:38.575 [2024-07-14 21:22:50.110143] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:23:38.575 [2024-07-14 21:22:50.110154] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:38.575 [2024-07-14 21:22:50.110170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:38.575 [2024-07-14 21:22:50.110181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:38.575 [2024-07-14 21:22:50.110190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:38.575 [2024-07-14 21:22:50.110200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:38.575 [2024-07-14 21:22:50.110210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.575 [2024-07-14 21:22:50.110225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:38.575 [2024-07-14 21:22:50.110237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:23:38.575 [2024-07-14 21:22:50.110247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.833 [2024-07-14 21:22:50.126888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.833 [2024-07-14 21:22:50.126926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:38.833 [2024-07-14 21:22:50.126973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.598 ms 00:23:38.833 [2024-07-14 21:22:50.126985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.833 [2024-07-14 21:22:50.127429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.833 [2024-07-14 21:22:50.127461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:38.833 [2024-07-14 21:22:50.127476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:23:38.834 [2024-07-14 21:22:50.127487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.160271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.160315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.834 [2024-07-14 21:22:50.160346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:38.834 [2024-07-14 21:22:50.160356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.160445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.160461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.834 [2024-07-14 21:22:50.160474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.160484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.160561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.160579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.834 [2024-07-14 21:22:50.160591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.160602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.160629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.160643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.834 [2024-07-14 21:22:50.160654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.160665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.246650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.246706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.834 [2024-07-14 21:22:50.246754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.246765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.321353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.321406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.834 [2024-07-14 21:22:50.321439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.321450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.321520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.321535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.834 [2024-07-14 21:22:50.321545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.321555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.321592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.321612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.834 [2024-07-14 21:22:50.321623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.321632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.321735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.321752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.834 [2024-07-14 
21:22:50.321764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.321773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.321867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.321886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:38.834 [2024-07-14 21:22:50.321904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.321915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.321959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.321974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.834 [2024-07-14 21:22:50.321985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.321995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.322046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.834 [2024-07-14 21:22:50.322082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.834 [2024-07-14 21:22:50.322094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.834 [2024-07-14 21:22:50.322120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.834 [2024-07-14 21:22:50.322270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.570 ms, result 0 00:23:40.211 00:23:40.211 00:23:40.211 21:22:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:40.211 [2024-07-14 21:22:51.636904] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:40.211 [2024-07-14 21:22:51.637059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82443 ] 00:23:40.470 [2024-07-14 21:22:51.804544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.470 [2024-07-14 21:22:51.969553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.730 [2024-07-14 21:22:52.247098] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.730 [2024-07-14 21:22:52.247174] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.992 [2024-07-14 21:22:52.404974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.405024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.992 [2024-07-14 21:22:52.405058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.992 [2024-07-14 21:22:52.405068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.405135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.405155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.992 [2024-07-14 21:22:52.405167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:40.992 [2024-07-14 21:22:52.405185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.405214] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.992 [2024-07-14 21:22:52.406052] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.992 [2024-07-14 21:22:52.406087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.406103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.992 [2024-07-14 21:22:52.406115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:23:40.992 [2024-07-14 21:22:52.406124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.407346] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.992 [2024-07-14 21:22:52.421254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.421294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.992 [2024-07-14 21:22:52.421326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.910 ms 00:23:40.992 [2024-07-14 21:22:52.421336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.421401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.421420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:40.992 [2024-07-14 21:22:52.421434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:40.992 [2024-07-14 21:22:52.421444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.425931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:40.992 [2024-07-14 21:22:52.425970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.992 [2024-07-14 21:22:52.426000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.406 ms 00:23:40.992 [2024-07-14 21:22:52.426009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.426093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.426113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.992 [2024-07-14 21:22:52.426124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:40.992 [2024-07-14 21:22:52.426133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.426186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.426203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.992 [2024-07-14 21:22:52.426214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:40.992 [2024-07-14 21:22:52.426223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.426253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.992 [2024-07-14 21:22:52.430199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.430246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.992 [2024-07-14 21:22:52.430276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.954 ms 00:23:40.992 [2024-07-14 21:22:52.430286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.430326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.430341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.992 [2024-07-14 21:22:52.430351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:40.992 [2024-07-14 21:22:52.430361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.430398] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.992 [2024-07-14 21:22:52.430426] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.992 [2024-07-14 21:22:52.430461] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.992 [2024-07-14 21:22:52.430480] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:40.992 [2024-07-14 21:22:52.430567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.992 [2024-07-14 21:22:52.430581] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.992 [2024-07-14 21:22:52.430594] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:40.992 [2024-07-14 21:22:52.430606] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.992 [2024-07-14 21:22:52.430617] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.992 [2024-07-14 21:22:52.430627] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:40.992 [2024-07-14 21:22:52.430636] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.992 [2024-07-14 21:22:52.430645] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.992 [2024-07-14 21:22:52.430654] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.992 [2024-07-14 21:22:52.430664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.430678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.992 [2024-07-14 21:22:52.430688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:23:40.992 [2024-07-14 21:22:52.430697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.430771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.992 [2024-07-14 21:22:52.430784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.992 [2024-07-14 21:22:52.430794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:40.992 [2024-07-14 21:22:52.430804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.992 [2024-07-14 21:22:52.430910] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.992 [2024-07-14 21:22:52.430928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.992 [2024-07-14 21:22:52.430943] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.992 [2024-07-14 21:22:52.430953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.992 [2024-07-14 21:22:52.430963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.992 [2024-07-14 21:22:52.430972] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.992 [2024-07-14 21:22:52.430981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:40.992 [2024-07-14 21:22:52.430991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.992 [2024-07-14 21:22:52.431000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.992 [2024-07-14 21:22:52.431017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.992 [2024-07-14 21:22:52.431025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:40.992 [2024-07-14 21:22:52.431036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.992 [2024-07-14 21:22:52.431045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.992 [2024-07-14 21:22:52.431054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:40.992 [2024-07-14 21:22:52.431063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:40.992 [2024-07-14 21:22:52.431080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:40.992 [2024-07-14 21:22:52.431088] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.992 [2024-07-14 21:22:52.431116] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.992 [2024-07-14 21:22:52.431134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.992 [2024-07-14 21:22:52.431142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.992 [2024-07-14 21:22:52.431159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.992 [2024-07-14 21:22:52.431167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.992 [2024-07-14 21:22:52.431184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.992 [2024-07-14 21:22:52.431192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.992 [2024-07-14 21:22:52.431210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.992 [2024-07-14 21:22:52.431218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.992 [2024-07-14 21:22:52.431235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.992 [2024-07-14 21:22:52.431243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:40.992 [2024-07-14 21:22:52.431252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.992 [2024-07-14 21:22:52.431260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.992 [2024-07-14 21:22:52.431269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:40.992 [2024-07-14 21:22:52.431277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.992 [2024-07-14 21:22:52.431293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:40.992 [2024-07-14 21:22:52.431303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.992 [2024-07-14 21:22:52.431311] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.992 [2024-07-14 21:22:52.431321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.992 [2024-07-14 21:22:52.431330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.992 [2024-07-14 21:22:52.431339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.993 [2024-07-14 21:22:52.431349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.993 [2024-07-14 21:22:52.431357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.993 [2024-07-14 21:22:52.431366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.993 
[2024-07-14 21:22:52.431374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.993 [2024-07-14 21:22:52.431382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.993 [2024-07-14 21:22:52.431391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.993 [2024-07-14 21:22:52.431401] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.993 [2024-07-14 21:22:52.431422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.993 [2024-07-14 21:22:52.431432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:40.993 [2024-07-14 21:22:52.431442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:40.993 [2024-07-14 21:22:52.431451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:40.993 [2024-07-14 21:22:52.431461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:40.993 [2024-07-14 21:22:52.431470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:40.993 [2024-07-14 21:22:52.431479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:40.993 [2024-07-14 21:22:52.431488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:40.993 [2024-07-14 21:22:52.431498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:40.993 [2024-07-14 21:22:52.431507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:40.993 [2024-07-14 21:22:52.431516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:40.993 [2024-07-14 21:22:52.431526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:40.993 [2024-07-14 21:22:52.431535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:40.993 [2024-07-14 21:22:52.431544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:40.993 [2024-07-14 21:22:52.431554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:40.993 [2024-07-14 21:22:52.431563] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:40.993 [2024-07-14 21:22:52.431573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.993 [2024-07-14 21:22:52.431584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.993 [2024-07-14 21:22:52.431594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.993 [2024-07-14 21:22:52.431603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.993 [2024-07-14 21:22:52.431613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.993 [2024-07-14 21:22:52.431623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.993 [2024-07-14 21:22:52.431638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.993 [2024-07-14 21:22:52.431648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:23:40.993 [2024-07-14 21:22:52.431657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.993 [2024-07-14 21:22:52.468588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.993 [2024-07-14 21:22:52.468930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.993 [2024-07-14 21:22:52.469063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.855 ms 00:23:40.993 [2024-07-14 21:22:52.469114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.993 [2024-07-14 21:22:52.469257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.993 [2024-07-14 21:22:52.469319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.993 [2024-07-14 21:22:52.469412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:40.993 [2024-07-14 21:22:52.469457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.993 [2024-07-14 21:22:52.502063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.993 [2024-07-14 21:22:52.502303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.993 [2024-07-14 21:22:52.502460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.488 ms 00:23:40.993 [2024-07-14 21:22:52.502510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.993 [2024-07-14 21:22:52.502590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.993 [2024-07-14 21:22:52.502698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.993 [2024-07-14 21:22:52.502749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.993 [2024-07-14 21:22:52.502784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.993 [2024-07-14 21:22:52.503194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.993 [2024-07-14 21:22:52.503325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.993 [2024-07-14 21:22:52.503427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:23:40.993 [2024-07-14 21:22:52.503550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.993 [2024-07-14 21:22:52.503711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.993 [2024-07-14 21:22:52.503748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.993 [2024-07-14 21:22:52.503761] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:23:40.993 [2024-07-14 21:22:52.503772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.993 [2024-07-14 21:22:52.517954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.993 [2024-07-14 21:22:52.517990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.993 [2024-07-14 21:22:52.518021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.153 ms 00:23:40.993 [2024-07-14 21:22:52.518030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.534371] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:41.267 [2024-07-14 21:22:52.534417] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:41.267 [2024-07-14 21:22:52.534450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.534461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:41.267 [2024-07-14 21:22:52.534473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.307 ms 00:23:41.267 [2024-07-14 21:22:52.534483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.566879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.566937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:41.267 [2024-07-14 21:22:52.566956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.333 ms 00:23:41.267 [2024-07-14 21:22:52.566976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.582996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.583040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:41.267 [2024-07-14 21:22:52.583057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.964 ms 00:23:41.267 [2024-07-14 21:22:52.583069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.597991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.598044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:41.267 [2024-07-14 21:22:52.598075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.861 ms 00:23:41.267 [2024-07-14 21:22:52.598086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.598856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.598909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:41.267 [2024-07-14 21:22:52.598926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:23:41.267 [2024-07-14 21:22:52.598938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.667716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.667769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:41.267 [2024-07-14 21:22:52.667803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.751 ms 00:23:41.267 [2024-07-14 21:22:52.667877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.679974] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:41.267 [2024-07-14 21:22:52.682452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.682484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:41.267 [2024-07-14 21:22:52.682514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.500 ms 00:23:41.267 [2024-07-14 21:22:52.682523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.682619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.682637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:41.267 [2024-07-14 21:22:52.682649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:41.267 [2024-07-14 21:22:52.682658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.684231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.684269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:41.267 [2024-07-14 21:22:52.684313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.524 ms 00:23:41.267 [2024-07-14 21:22:52.684323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.684350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.684365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:41.267 [2024-07-14 21:22:52.684376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:41.267 [2024-07-14 21:22:52.684386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.267 [2024-07-14 21:22:52.684466] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:41.267 [2024-07-14 21:22:52.684485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.267 [2024-07-14 21:22:52.684496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:41.267 [2024-07-14 21:22:52.684513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:41.267 [2024-07-14 21:22:52.684524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.268 [2024-07-14 21:22:52.711961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.268 [2024-07-14 21:22:52.712000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:41.268 [2024-07-14 21:22:52.712032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.408 ms 00:23:41.268 [2024-07-14 21:22:52.712042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.268 [2024-07-14 21:22:52.712116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.268 [2024-07-14 21:22:52.712142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:41.268 [2024-07-14 21:22:52.712153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:41.268 [2024-07-14 21:22:52.712163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
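Each management step above is emitted by mngt/ftl_mngt.c:trace_step as an Action/name/duration/status quadruple, which makes the startup sequence easy to profile offline. A minimal sketch, assuming the console output has been saved one entry per line to a file named build.log (the file name is illustrative):

  awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     name = $0 }
       /trace_step/ && /duration:/ { sub(/.*duration: /, ""); print $1, "ms -", name }' build.log \
    | sort -rn | head   # slowest management steps first

For the startup sequence above this puts 'Restore P2L checkpoints' (68.751 ms) and 'Initialize metadata' (36.855 ms) at the top.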
00:23:41.268 [2024-07-14 21:22:52.720516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.195 ms, result 0 00:24:25.198  Copying: 21/1024 [MB] (21 MBps) Copying: 45/1024 [MB] (23 MBps) Copying: 67/1024 [MB] (22 MBps) Copying: 91/1024 [MB] (23 MBps) Copying: 115/1024 [MB] (24 MBps) Copying: 139/1024 [MB] (24 MBps) Copying: 163/1024 [MB] (23 MBps) Copying: 186/1024 [MB] (23 MBps) Copying: 209/1024 [MB] (22 MBps) Copying: 233/1024 [MB] (23 MBps) Copying: 256/1024 [MB] (23 MBps) Copying: 278/1024 [MB] (22 MBps) Copying: 302/1024 [MB] (23 MBps) Copying: 326/1024 [MB] (23 MBps) Copying: 349/1024 [MB] (23 MBps) Copying: 373/1024 [MB] (23 MBps) Copying: 397/1024 [MB] (23 MBps) Copying: 421/1024 [MB] (24 MBps) Copying: 444/1024 [MB] (23 MBps) Copying: 468/1024 [MB] (23 MBps) Copying: 492/1024 [MB] (23 MBps) Copying: 516/1024 [MB] (23 MBps) Copying: 540/1024 [MB] (24 MBps) Copying: 564/1024 [MB] (24 MBps) Copying: 588/1024 [MB] (23 MBps) Copying: 612/1024 [MB] (23 MBps) Copying: 635/1024 [MB] (23 MBps) Copying: 658/1024 [MB] (23 MBps) Copying: 682/1024 [MB] (23 MBps) Copying: 705/1024 [MB] (23 MBps) Copying: 729/1024 [MB] (23 MBps) Copying: 753/1024 [MB] (23 MBps) Copying: 776/1024 [MB] (23 MBps) Copying: 800/1024 [MB] (23 MBps) Copying: 823/1024 [MB] (23 MBps) Copying: 847/1024 [MB] (23 MBps) Copying: 870/1024 [MB] (23 MBps) Copying: 893/1024 [MB] (23 MBps) Copying: 916/1024 [MB] (23 MBps) Copying: 940/1024 [MB] (23 MBps) Copying: 963/1024 [MB] (23 MBps) Copying: 986/1024 [MB] (23 MBps) Copying: 1010/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-14 21:23:36.673004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.198 [2024-07-14 21:23:36.673119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:25.198 [2024-07-14 21:23:36.673156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:25.198 [2024-07-14 21:23:36.673172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.198 [2024-07-14 21:23:36.673218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:25.198 [2024-07-14 21:23:36.678006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.198 [2024-07-14 21:23:36.678053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:25.198 [2024-07-14 21:23:36.678073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:24:25.198 [2024-07-14 21:23:36.678088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.198 [2024-07-14 21:23:36.678388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.198 [2024-07-14 21:23:36.678411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:25.198 [2024-07-14 21:23:36.678427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:24:25.198 [2024-07-14 21:23:36.678441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.198 [2024-07-14 21:23:36.684309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.198 [2024-07-14 21:23:36.684370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:25.198 [2024-07-14 21:23:36.684398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.842 ms 00:24:25.198 [2024-07-14 21:23:36.684432] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:25.198 [2024-07-14 21:23:36.690817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.198 [2024-07-14 21:23:36.691012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:25.198 [2024-07-14 21:23:36.691126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.322 ms 00:24:25.198 [2024-07-14 21:23:36.691190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.198 [2024-07-14 21:23:36.721339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.198 [2024-07-14 21:23:36.721526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:25.198 [2024-07-14 21:23:36.721552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.072 ms 00:24:25.198 [2024-07-14 21:23:36.721565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.198 [2024-07-14 21:23:36.739185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.198 [2024-07-14 21:23:36.739227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:25.198 [2024-07-14 21:23:36.739259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.561 ms 00:24:25.198 [2024-07-14 21:23:36.739290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.458 [2024-07-14 21:23:36.855334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.458 [2024-07-14 21:23:36.855398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:25.458 [2024-07-14 21:23:36.855417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.999 ms 00:24:25.458 [2024-07-14 21:23:36.855427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.458 [2024-07-14 21:23:36.883214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.458 [2024-07-14 21:23:36.883268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:25.458 [2024-07-14 21:23:36.883315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.766 ms 00:24:25.458 [2024-07-14 21:23:36.883326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.458 [2024-07-14 21:23:36.912408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.458 [2024-07-14 21:23:36.912469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:25.458 [2024-07-14 21:23:36.912486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.024 ms 00:24:25.458 [2024-07-14 21:23:36.912497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.458 [2024-07-14 21:23:36.939413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.458 [2024-07-14 21:23:36.939451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:25.458 [2024-07-14 21:23:36.939482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.874 ms 00:24:25.458 [2024-07-14 21:23:36.939520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.458 [2024-07-14 21:23:36.967413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.458 [2024-07-14 21:23:36.967447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:25.458 [2024-07-14 21:23:36.967477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.812 ms 
00:24:25.458 [2024-07-14 21:23:36.967486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.458 [2024-07-14 21:23:36.967523] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.458 [2024-07-14 21:23:36.967545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:24:25.458 [2024-07-14 21:23:36.967557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967784] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.967991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.458 [2024-07-14 21:23:36.968001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 
[2024-07-14 21:23:36.968108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:24:25.459 [2024-07-14 21:23:36.968384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:25.459 [2024-07-14 21:23:36.968726] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:25.459 [2024-07-14 21:23:36.968737] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 183c2ef8-c7fe-444a-9b23-fb17f92b76cf 00:24:25.459 [2024-07-14 21:23:36.968748] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:24:25.459 [2024-07-14 21:23:36.968758] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 17088 00:24:25.459 [2024-07-14 21:23:36.968768] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 16128 00:24:25.459 [2024-07-14 21:23:36.968779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0595 00:24:25.459 [2024-07-14 21:23:36.968789] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.459 [2024-07-14 21:23:36.968806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.459 [2024-07-14 21:23:36.968816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.459 [2024-07-14 21:23:36.968850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.459 [2024-07-14 21:23:36.968862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.459 [2024-07-14 21:23:36.968873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.459 [2024-07-14 21:23:36.968883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.459 [2024-07-14 21:23:36.968898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:24:25.459 [2024-07-14 21:23:36.968908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.459 [2024-07-14 21:23:36.983616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.459 [2024-07-14 21:23:36.983650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.459 [2024-07-14 21:23:36.983664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.671 ms 00:24:25.459 [2024-07-14 21:23:36.983685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.459 [2024-07-14 21:23:36.984180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.459 [2024-07-14 21:23:36.984220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.459 [2024-07-14 21:23:36.984232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:24:25.459 [2024-07-14 21:23:36.984242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.718 [2024-07-14 21:23:37.020396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.718 [2024-07-14 21:23:37.020479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.718 [2024-07-14 21:23:37.020497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.718 [2024-07-14 21:23:37.020508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.718 [2024-07-14 21:23:37.020580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.020596] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.719 [2024-07-14 21:23:37.020609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.020619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.020694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.020713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.719 [2024-07-14 21:23:37.020725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.020737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.020779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.020794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.719 [2024-07-14 21:23:37.020837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.020902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.107619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.107681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.719 [2024-07-14 21:23:37.107713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.107723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.180741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.180885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.719 [2024-07-14 21:23:37.180920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.180932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.181016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.181033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.719 [2024-07-14 21:23:37.181045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.181068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.181111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.181126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.719 [2024-07-14 21:23:37.181143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.181154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.181335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.181355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.719 [2024-07-14 21:23:37.181367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.181378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.181424] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.181442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:25.719 [2024-07-14 21:23:37.181459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.181470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.181514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.181529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.719 [2024-07-14 21:23:37.181540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.181551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.181602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.719 [2024-07-14 21:23:37.181619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.719 [2024-07-14 21:23:37.181635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.719 [2024-07-14 21:23:37.181646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.719 [2024-07-14 21:23:37.181823] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.777 ms, result 0 00:24:26.677 00:24:26.677 00:24:26.677 21:23:38 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:28.583 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:28.583 21:23:40 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:28.583 21:23:40 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:28.583 21:23:40 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:28.842 Process with pid 80850 is not found 00:24:28.842 Remove shared memory files 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80850 00:24:28.842 21:23:40 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 80850 ']' 00:24:28.842 21:23:40 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 80850 00:24:28.842 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80850) - No such process 00:24:28.842 21:23:40 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 80850 is not found' 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:28.842 21:23:40 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:28.842 00:24:28.842 real 3m25.576s 00:24:28.842 user 3m11.863s 00:24:28.842 sys 0m15.356s 00:24:28.842 21:23:40 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:24:28.842 21:23:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:28.842 ************************************ 00:24:28.842 END TEST ftl_restore 00:24:28.842 ************************************ 00:24:28.842 21:23:40 ftl -- common/autotest_common.sh@1142 -- # return 0 00:24:28.842 21:23:40 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:28.842 21:23:40 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:28.842 21:23:40 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.842 21:23:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:28.842 ************************************ 00:24:28.842 START TEST ftl_dirty_shutdown 00:24:28.842 ************************************ 00:24:28.842 21:23:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:29.101 * Looking for test storage... 00:24:29.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:29.101 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 
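Before the dirty_shutdown run that the trace is now setting up, note how ftl_restore decided it passed: the pass/fail signal is a plain checksum round-trip against the FTL bdev's contents. A minimal sketch of the pattern (file names follow the log; the intervening shutdown/restore steps are elided):

  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > testfile.md5   # snapshot the expected contents
  # ... tear the FTL device down, bring it back up, read the data back ...
  md5sum -c testfile.md5   # prints 'testfile: OK' only if the restore preserved the data

The 'restore.sh@82 -- # md5sum -c' line above is exactly this final verification step.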
00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82982 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82982 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 82982 ']' 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.102 21:23:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:29.102 [2024-07-14 21:23:40.593833] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
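waitforlisten above blocks until the freshly forked spdk_tgt (pid 82982) starts answering on its UNIX domain RPC socket. A reduced sketch of the same start-and-poll idea, using the standard rpc_get_methods RPC as the liveness probe (the real helper in autotest_common.sh does additional bookkeeping):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # keep polling until the socket accepts RPCs
  done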
00:24:29.102 [2024-07-14 21:23:40.594100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82982 ] 00:24:29.360 [2024-07-14 21:23:40.771993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.619 [2024-07-14 21:23:40.924854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.186 21:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.186 21:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:24:30.186 21:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:30.186 21:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:30.186 21:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:30.186 21:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:30.186 21:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:30.186 21:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:30.445 21:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:30.445 21:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:30.445 21:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:30.445 21:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:24:30.445 21:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:30.445 21:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:30.445 21:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:30.445 21:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:30.704 { 00:24:30.704 "name": "nvme0n1", 00:24:30.704 "aliases": [ 00:24:30.704 "73c558ba-d550-4729-9b1a-3669e0a19a7f" 00:24:30.704 ], 00:24:30.704 "product_name": "NVMe disk", 00:24:30.704 "block_size": 4096, 00:24:30.704 "num_blocks": 1310720, 00:24:30.704 "uuid": "73c558ba-d550-4729-9b1a-3669e0a19a7f", 00:24:30.704 "assigned_rate_limits": { 00:24:30.704 "rw_ios_per_sec": 0, 00:24:30.704 "rw_mbytes_per_sec": 0, 00:24:30.704 "r_mbytes_per_sec": 0, 00:24:30.704 "w_mbytes_per_sec": 0 00:24:30.704 }, 00:24:30.704 "claimed": true, 00:24:30.704 "claim_type": "read_many_write_one", 00:24:30.704 "zoned": false, 00:24:30.704 "supported_io_types": { 00:24:30.704 "read": true, 00:24:30.704 "write": true, 00:24:30.704 "unmap": true, 00:24:30.704 "flush": true, 00:24:30.704 "reset": true, 00:24:30.704 "nvme_admin": true, 00:24:30.704 "nvme_io": true, 00:24:30.704 "nvme_io_md": false, 00:24:30.704 "write_zeroes": true, 00:24:30.704 "zcopy": false, 00:24:30.704 "get_zone_info": false, 00:24:30.704 "zone_management": false, 00:24:30.704 "zone_append": false, 00:24:30.704 "compare": true, 00:24:30.704 "compare_and_write": false, 00:24:30.704 "abort": true, 00:24:30.704 "seek_hole": false, 00:24:30.704 "seek_data": false, 00:24:30.704 "copy": true, 00:24:30.704 
"nvme_iov_md": false 00:24:30.704 }, 00:24:30.704 "driver_specific": { 00:24:30.704 "nvme": [ 00:24:30.704 { 00:24:30.704 "pci_address": "0000:00:11.0", 00:24:30.704 "trid": { 00:24:30.704 "trtype": "PCIe", 00:24:30.704 "traddr": "0000:00:11.0" 00:24:30.704 }, 00:24:30.704 "ctrlr_data": { 00:24:30.704 "cntlid": 0, 00:24:30.704 "vendor_id": "0x1b36", 00:24:30.704 "model_number": "QEMU NVMe Ctrl", 00:24:30.704 "serial_number": "12341", 00:24:30.704 "firmware_revision": "8.0.0", 00:24:30.704 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:30.704 "oacs": { 00:24:30.704 "security": 0, 00:24:30.704 "format": 1, 00:24:30.704 "firmware": 0, 00:24:30.704 "ns_manage": 1 00:24:30.704 }, 00:24:30.704 "multi_ctrlr": false, 00:24:30.704 "ana_reporting": false 00:24:30.704 }, 00:24:30.704 "vs": { 00:24:30.704 "nvme_version": "1.4" 00:24:30.704 }, 00:24:30.704 "ns_data": { 00:24:30.704 "id": 1, 00:24:30.704 "can_share": false 00:24:30.704 } 00:24:30.704 } 00:24:30.704 ], 00:24:30.704 "mp_policy": "active_passive" 00:24:30.704 } 00:24:30.704 } 00:24:30.704 ]' 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:30.704 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:30.963 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=50a4b650-b76a-4a84-9c68-d118af8ef27d 00:24:30.963 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:30.963 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50a4b650-b76a-4a84-9c68-d118af8ef27d 00:24:31.221 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:31.479 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ff2fbde4-1680-4c93-ab20-762a4c4be704 00:24:31.479 21:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ff2fbde4-1680-4c93-ab20-762a4c4be704 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:31.737 
21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:31.737 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:31.994 { 00:24:31.994 "name": "73affa1c-3776-4d2a-be6e-ad3ed2b43b5c", 00:24:31.994 "aliases": [ 00:24:31.994 "lvs/nvme0n1p0" 00:24:31.994 ], 00:24:31.994 "product_name": "Logical Volume", 00:24:31.994 "block_size": 4096, 00:24:31.994 "num_blocks": 26476544, 00:24:31.994 "uuid": "73affa1c-3776-4d2a-be6e-ad3ed2b43b5c", 00:24:31.994 "assigned_rate_limits": { 00:24:31.994 "rw_ios_per_sec": 0, 00:24:31.994 "rw_mbytes_per_sec": 0, 00:24:31.994 "r_mbytes_per_sec": 0, 00:24:31.994 "w_mbytes_per_sec": 0 00:24:31.994 }, 00:24:31.994 "claimed": false, 00:24:31.994 "zoned": false, 00:24:31.994 "supported_io_types": { 00:24:31.994 "read": true, 00:24:31.994 "write": true, 00:24:31.994 "unmap": true, 00:24:31.994 "flush": false, 00:24:31.994 "reset": true, 00:24:31.994 "nvme_admin": false, 00:24:31.994 "nvme_io": false, 00:24:31.994 "nvme_io_md": false, 00:24:31.994 "write_zeroes": true, 00:24:31.994 "zcopy": false, 00:24:31.994 "get_zone_info": false, 00:24:31.994 "zone_management": false, 00:24:31.994 "zone_append": false, 00:24:31.994 "compare": false, 00:24:31.994 "compare_and_write": false, 00:24:31.994 "abort": false, 00:24:31.994 "seek_hole": true, 00:24:31.994 "seek_data": true, 00:24:31.994 "copy": false, 00:24:31.994 "nvme_iov_md": false 00:24:31.994 }, 00:24:31.994 "driver_specific": { 00:24:31.994 "lvol": { 00:24:31.994 "lvol_store_uuid": "ff2fbde4-1680-4c93-ab20-762a4c4be704", 00:24:31.994 "base_bdev": "nvme0n1", 00:24:31.994 "thin_provision": true, 00:24:31.994 "num_allocated_clusters": 0, 00:24:31.994 "snapshot": false, 00:24:31.994 "clone": false, 00:24:31.994 "esnap_clone": false 00:24:31.994 } 00:24:31.994 } 00:24:31.994 } 00:24:31.994 ]' 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:31.994 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:32.251 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:32.251 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:32.251 21:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:32.251 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:32.251 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:32.251 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:32.251 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:32.251 21:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:32.508 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:32.508 { 00:24:32.508 "name": "73affa1c-3776-4d2a-be6e-ad3ed2b43b5c", 00:24:32.508 "aliases": [ 00:24:32.508 "lvs/nvme0n1p0" 00:24:32.508 ], 00:24:32.508 "product_name": "Logical Volume", 00:24:32.508 "block_size": 4096, 00:24:32.508 "num_blocks": 26476544, 00:24:32.508 "uuid": "73affa1c-3776-4d2a-be6e-ad3ed2b43b5c", 00:24:32.508 "assigned_rate_limits": { 00:24:32.508 "rw_ios_per_sec": 0, 00:24:32.508 "rw_mbytes_per_sec": 0, 00:24:32.508 "r_mbytes_per_sec": 0, 00:24:32.508 "w_mbytes_per_sec": 0 00:24:32.508 }, 00:24:32.508 "claimed": false, 00:24:32.508 "zoned": false, 00:24:32.508 "supported_io_types": { 00:24:32.508 "read": true, 00:24:32.508 "write": true, 00:24:32.508 "unmap": true, 00:24:32.508 "flush": false, 00:24:32.508 "reset": true, 00:24:32.508 "nvme_admin": false, 00:24:32.508 "nvme_io": false, 00:24:32.508 "nvme_io_md": false, 00:24:32.508 "write_zeroes": true, 00:24:32.508 "zcopy": false, 00:24:32.508 "get_zone_info": false, 00:24:32.508 "zone_management": false, 00:24:32.508 "zone_append": false, 00:24:32.508 "compare": false, 00:24:32.508 "compare_and_write": false, 00:24:32.508 "abort": false, 00:24:32.508 "seek_hole": true, 00:24:32.508 "seek_data": true, 00:24:32.508 "copy": false, 00:24:32.508 "nvme_iov_md": false 00:24:32.508 }, 00:24:32.508 "driver_specific": { 00:24:32.508 "lvol": { 00:24:32.508 "lvol_store_uuid": "ff2fbde4-1680-4c93-ab20-762a4c4be704", 00:24:32.508 "base_bdev": "nvme0n1", 00:24:32.508 "thin_provision": true, 00:24:32.508 "num_allocated_clusters": 0, 00:24:32.508 "snapshot": false, 00:24:32.508 "clone": false, 00:24:32.508 "esnap_clone": false 00:24:32.508 } 00:24:32.508 } 00:24:32.508 } 00:24:32.508 ]' 00:24:32.508 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:32.766 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:32.766 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:32.766 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:32.766 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:32.766 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:32.766 21:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:32.766 21:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:33.023 21:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:33.023 21:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:33.023 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:33.023 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:33.023 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:33.023 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:33.023 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:33.281 { 00:24:33.281 "name": "73affa1c-3776-4d2a-be6e-ad3ed2b43b5c", 00:24:33.281 "aliases": [ 00:24:33.281 "lvs/nvme0n1p0" 00:24:33.281 ], 00:24:33.281 "product_name": "Logical Volume", 00:24:33.281 "block_size": 4096, 00:24:33.281 "num_blocks": 26476544, 00:24:33.281 "uuid": "73affa1c-3776-4d2a-be6e-ad3ed2b43b5c", 00:24:33.281 "assigned_rate_limits": { 00:24:33.281 "rw_ios_per_sec": 0, 00:24:33.281 "rw_mbytes_per_sec": 0, 00:24:33.281 "r_mbytes_per_sec": 0, 00:24:33.281 "w_mbytes_per_sec": 0 00:24:33.281 }, 00:24:33.281 "claimed": false, 00:24:33.281 "zoned": false, 00:24:33.281 "supported_io_types": { 00:24:33.281 "read": true, 00:24:33.281 "write": true, 00:24:33.281 "unmap": true, 00:24:33.281 "flush": false, 00:24:33.281 "reset": true, 00:24:33.281 "nvme_admin": false, 00:24:33.281 "nvme_io": false, 00:24:33.281 "nvme_io_md": false, 00:24:33.281 "write_zeroes": true, 00:24:33.281 "zcopy": false, 00:24:33.281 "get_zone_info": false, 00:24:33.281 "zone_management": false, 00:24:33.281 "zone_append": false, 00:24:33.281 "compare": false, 00:24:33.281 "compare_and_write": false, 00:24:33.281 "abort": false, 00:24:33.281 "seek_hole": true, 00:24:33.281 "seek_data": true, 00:24:33.281 "copy": false, 00:24:33.281 "nvme_iov_md": false 00:24:33.281 }, 00:24:33.281 "driver_specific": { 00:24:33.281 "lvol": { 00:24:33.281 "lvol_store_uuid": "ff2fbde4-1680-4c93-ab20-762a4c4be704", 00:24:33.281 "base_bdev": "nvme0n1", 00:24:33.281 "thin_provision": true, 00:24:33.281 "num_allocated_clusters": 0, 00:24:33.281 "snapshot": false, 00:24:33.281 "clone": false, 00:24:33.281 "esnap_clone": false 00:24:33.281 } 00:24:33.281 } 00:24:33.281 } 00:24:33.281 ]' 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c 
--l2p_dram_limit 10' 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:33.281 21:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c --l2p_dram_limit 10 -c nvc0n1p0 00:24:33.540 [2024-07-14 21:23:45.007442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.007531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:33.540 [2024-07-14 21:23:45.007550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:33.540 [2024-07-14 21:23:45.007562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.007635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.007653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:33.540 [2024-07-14 21:23:45.007665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:33.540 [2024-07-14 21:23:45.007677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.007729] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:33.540 [2024-07-14 21:23:45.008856] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:33.540 [2024-07-14 21:23:45.008889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.008908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:33.540 [2024-07-14 21:23:45.008922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:24:33.540 [2024-07-14 21:23:45.008937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.009214] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 86b64d2f-1cec-4ba9-86bd-e5a50c3d64ab 00:24:33.540 [2024-07-14 21:23:45.010295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.010331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:33.540 [2024-07-14 21:23:45.010379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:33.540 [2024-07-14 21:23:45.010389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.015137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.015179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:33.540 [2024-07-14 21:23:45.015228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.695 ms 00:24:33.540 [2024-07-14 21:23:45.015238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.015342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.015358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:33.540 [2024-07-14 21:23:45.015372] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:33.540 [2024-07-14 21:23:45.015382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.015457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.015473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:33.540 [2024-07-14 21:23:45.015493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:33.540 [2024-07-14 21:23:45.015505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.015535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:33.540 [2024-07-14 21:23:45.019857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.019940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:33.540 [2024-07-14 21:23:45.019958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.333 ms 00:24:33.540 [2024-07-14 21:23:45.019975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.020020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.020038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:33.540 [2024-07-14 21:23:45.020051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:33.540 [2024-07-14 21:23:45.020065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.540 [2024-07-14 21:23:45.020158] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:33.540 [2024-07-14 21:23:45.020364] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:33.540 [2024-07-14 21:23:45.020386] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:33.540 [2024-07-14 21:23:45.020405] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:33.540 [2024-07-14 21:23:45.020452] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:33.540 [2024-07-14 21:23:45.020471] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:33.540 [2024-07-14 21:23:45.020484] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:33.540 [2024-07-14 21:23:45.020498] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:33.540 [2024-07-14 21:23:45.020513] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:33.540 [2024-07-14 21:23:45.020528] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:33.540 [2024-07-14 21:23:45.020541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.540 [2024-07-14 21:23:45.020555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:33.541 [2024-07-14 21:23:45.020568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:24:33.541 [2024-07-14 21:23:45.020582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.541 [2024-07-14 21:23:45.020678] mngt/ftl_mngt.c: 
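
Everything from "Check configuration" onward is SPDK's FTL startup sequence, and it is driven by a single RPC: dirty_shutdown.sh assembles ftl_construct_args and issues bdev_ftl_create with a 240 s timeout, pairing the thin lvol (base device) with the nvc0n1p0 split (NV write-buffer cache). A minimal sketch of the three calls that led here, with the identifiers this run used:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache-side controller
  $rpc bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB cache slice
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 73affa1c-3776-4d2a-be6e-ad3ed2b43b5c \
      --l2p_dram_limit 10 -c nvc0n1p0                                # 10 MiB resident L2P cap
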
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.541 [2024-07-14 21:23:45.020696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:33.541 [2024-07-14 21:23:45.020709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:33.541 [2024-07-14 21:23:45.020723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.541 [2024-07-14 21:23:45.020846] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:33.541 [2024-07-14 21:23:45.020870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:33.541 [2024-07-14 21:23:45.020895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.541 [2024-07-14 21:23:45.020910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.541 [2024-07-14 21:23:45.020923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:33.541 [2024-07-14 21:23:45.020936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:33.541 [2024-07-14 21:23:45.020949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:33.541 [2024-07-14 21:23:45.020976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:33.541 [2024-07-14 21:23:45.020987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.541 [2024-07-14 21:23:45.021025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:33.541 [2024-07-14 21:23:45.021066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:33.541 [2024-07-14 21:23:45.021090] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.541 [2024-07-14 21:23:45.021118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:33.541 [2024-07-14 21:23:45.021127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:33.541 [2024-07-14 21:23:45.021153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:33.541 [2024-07-14 21:23:45.021175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:33.541 [2024-07-14 21:23:45.021184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:33.541 [2024-07-14 21:23:45.021205] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.541 [2024-07-14 21:23:45.021226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:33.541 [2024-07-14 21:23:45.021237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.541 [2024-07-14 21:23:45.021256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:33.541 [2024-07-14 21:23:45.021265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.541 [2024-07-14 21:23:45.021285] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:33.541 [2024-07-14 21:23:45.021296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.541 [2024-07-14 21:23:45.021315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:33.541 [2024-07-14 21:23:45.021325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.541 [2024-07-14 21:23:45.021347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:33.541 [2024-07-14 21:23:45.021358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:33.541 [2024-07-14 21:23:45.021366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.541 [2024-07-14 21:23:45.021377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:33.541 [2024-07-14 21:23:45.021386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:33.541 [2024-07-14 21:23:45.021398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:33.541 [2024-07-14 21:23:45.021418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:33.541 [2024-07-14 21:23:45.021427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021437] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:33.541 [2024-07-14 21:23:45.021447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:33.541 [2024-07-14 21:23:45.021459] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.541 [2024-07-14 21:23:45.021469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.541 [2024-07-14 21:23:45.021480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:33.541 [2024-07-14 21:23:45.021489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:33.541 [2024-07-14 21:23:45.021502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:33.541 [2024-07-14 21:23:45.021512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:33.541 [2024-07-14 21:23:45.021522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:33.541 [2024-07-14 21:23:45.021532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:33.541 [2024-07-14 21:23:45.021546] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:33.541 [2024-07-14 21:23:45.021560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.541 [2024-07-14 21:23:45.021577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:33.541 [2024-07-14 21:23:45.021587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:33.541 [2024-07-14 21:23:45.021598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:33.541 [2024-07-14 21:23:45.021608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:33.541 [2024-07-14 21:23:45.021619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:33.541 [2024-07-14 21:23:45.021629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:33.541 [2024-07-14 21:23:45.021641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:33.541 [2024-07-14 21:23:45.021651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:33.541 [2024-07-14 21:23:45.021663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:33.541 [2024-07-14 21:23:45.021674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:33.541 [2024-07-14 21:23:45.021687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:33.541 [2024-07-14 21:23:45.021697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:33.541 [2024-07-14 21:23:45.021724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:33.541 [2024-07-14 21:23:45.021751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:33.541 [2024-07-14 21:23:45.021764] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:33.541 [2024-07-14 21:23:45.021777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.541 [2024-07-14 21:23:45.021791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:33.541 [2024-07-14 21:23:45.021803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:33.541 [2024-07-14 21:23:45.021816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:33.541 [2024-07-14 21:23:45.021828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:33.541 [2024-07-14 21:23:45.021842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.541 [2024-07-14 21:23:45.021854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:33.541 [2024-07-14 21:23:45.021911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:24:33.541 [2024-07-14 21:23:45.021924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.541 [2024-07-14 21:23:45.021976] mngt/ftl_mngt_misc.c: 
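
Two of the layout figures dumped above cross-check cleanly: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB "Region l2p", and the base-device data region of 0x1900000 blocks is 26214400 x 4 KiB = 102400 MiB, matching "Region data_btm". In shell arithmetic:

  echo $((20971520 * 4 / 1024 / 1024))        # 80     -> l2p region size, MiB
  echo $((0x1900000 * 4096 / 1024 / 1024))    # 102400 -> data_btm region size, MiB

Only 10 of those 80 MiB may stay resident in DRAM (--l2p_dram_limit 10), which is why the later "l2p maximum resident size is: 9 (of 10) MiB" notice is expected here rather than alarming.
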
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:33.541 [2024-07-14 21:23:45.021992] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:36.068 [2024-07-14 21:23:47.227858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.227941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:36.068 [2024-07-14 21:23:47.227985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2205.874 ms 00:24:36.068 [2024-07-14 21:23:47.227999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.258118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.258179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.068 [2024-07-14 21:23:47.258233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.790 ms 00:24:36.068 [2024-07-14 21:23:47.258245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.258407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.258424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:36.068 [2024-07-14 21:23:47.258438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:36.068 [2024-07-14 21:23:47.258451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.291562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.291610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.068 [2024-07-14 21:23:47.291645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.061 ms 00:24:36.068 [2024-07-14 21:23:47.291659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.291705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.291741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.068 [2024-07-14 21:23:47.291756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:36.068 [2024-07-14 21:23:47.291767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.292237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.292286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.068 [2024-07-14 21:23:47.292301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:24:36.068 [2024-07-14 21:23:47.292329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.292497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.292523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.068 [2024-07-14 21:23:47.292542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:24:36.068 [2024-07-14 21:23:47.292554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.308079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.308119] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.068 [2024-07-14 21:23:47.308153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.495 ms 00:24:36.068 [2024-07-14 21:23:47.308164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.319534] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:36.068 [2024-07-14 21:23:47.322238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.322286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:36.068 [2024-07-14 21:23:47.322303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.976 ms 00:24:36.068 [2024-07-14 21:23:47.322315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.394271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.394360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:36.068 [2024-07-14 21:23:47.394380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.922 ms 00:24:36.068 [2024-07-14 21:23:47.394393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.394600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.394635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:36.068 [2024-07-14 21:23:47.394647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:24:36.068 [2024-07-14 21:23:47.394661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.422021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.422096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:36.068 [2024-07-14 21:23:47.422114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.273 ms 00:24:36.068 [2024-07-14 21:23:47.422127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.449496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.449554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:36.068 [2024-07-14 21:23:47.449571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.301 ms 00:24:36.068 [2024-07-14 21:23:47.449583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.450382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.450416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:36.068 [2024-07-14 21:23:47.450447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:24:36.068 [2024-07-14 21:23:47.450463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.068 [2024-07-14 21:23:47.529913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.068 [2024-07-14 21:23:47.529995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:36.068 [2024-07-14 21:23:47.530015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.389 ms 00:24:36.068 [2024-07-14 21:23:47.530031] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-07-14 21:23:47.557662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.069 [2024-07-14 21:23:47.557703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:36.069 [2024-07-14 21:23:47.557735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.583 ms 00:24:36.069 [2024-07-14 21:23:47.557748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-07-14 21:23:47.584777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.069 [2024-07-14 21:23:47.584887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:36.069 [2024-07-14 21:23:47.584910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.979 ms 00:24:36.069 [2024-07-14 21:23:47.584923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.327 [2024-07-14 21:23:47.613822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.327 [2024-07-14 21:23:47.613949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:36.327 [2024-07-14 21:23:47.613970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.838 ms 00:24:36.327 [2024-07-14 21:23:47.613983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.327 [2024-07-14 21:23:47.614091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.327 [2024-07-14 21:23:47.614113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:36.327 [2024-07-14 21:23:47.614126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:36.327 [2024-07-14 21:23:47.614141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.327 [2024-07-14 21:23:47.614245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.327 [2024-07-14 21:23:47.614267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:36.327 [2024-07-14 21:23:47.614298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:36.327 [2024-07-14 21:23:47.614327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.327 [2024-07-14 21:23:47.615570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2607.594 ms, result 0 00:24:36.327 { 00:24:36.327 "name": "ftl0", 00:24:36.327 "uuid": "86b64d2f-1cec-4ba9-86bd-e5a50c3d64ab" 00:24:36.327 } 00:24:36.327 21:23:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:36.327 21:23:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:36.586 21:23:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:36.586 21:23:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:36.586 21:23:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:36.844 /dev/nbd0 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:36.844 1+0 records in 00:24:36.844 1+0 records out 00:24:36.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359444 s, 11.4 MB/s 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:24:36.844 21:23:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:36.844 [2024-07-14 21:23:48.290658] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:36.844 [2024-07-14 21:23:48.290860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83119 ] 00:24:37.102 [2024-07-14 21:23:48.450736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.102 [2024-07-14 21:23:48.620979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.938  Copying: 188/1024 [MB] (188 MBps) Copying: 383/1024 [MB] (194 MBps) Copying: 574/1024 [MB] (190 MBps) Copying: 759/1024 [MB] (185 MBps) Copying: 945/1024 [MB] (186 MBps) Copying: 1024/1024 [MB] (average 188 MBps) 00:24:43.938 00:24:43.938 21:23:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:46.471 21:23:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:46.471 [2024-07-14 21:23:57.591575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
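
This is the dirty-write setup: ftl0 is exported to the kernel over NBD, a 1 GiB random pattern (262144 x 4 KiB blocks) is staged to testfile and checksummed so it can be verified after the dirty shutdown, and the second spdk_dd now starting replays that pattern into /dev/nbd0 with O_DIRECT. The flow, condensed with the same binaries and paths as this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  modprobe nbd
  $rpc nbd_start_disk ftl0 /dev/nbd0                 # expose ftl0 as /dev/nbd0
  grep -q -w nbd0 /proc/partitions                   # waitfornbd's readiness probe
  $dd_bin -m 0x2 --if=/dev/urandom --of=$testfile --bs=4096 --count=262144
  md5sum $testfile                                   # reference checksum for later verification
  $dd_bin -m 0x2 --if=$testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
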
00:24:46.471 [2024-07-14 21:23:57.591754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83212 ] 00:24:46.471 [2024-07-14 21:23:57.764327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.471 [2024-07-14 21:23:57.965895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.031 Copying: 13/1024 [MB] (13 MBps) [66 intermediate progress ticks at 12-16 MBps elided] Copying: 1023/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:25:56.031 00:25:56.031 21:25:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 21:25:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 21:25:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 [2024-07-14 21:25:07.789450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.290 [2024-07-14 21:25:07.789509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:56.290 [2024-07-14 21:25:07.789560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 00:25:56.290 [2024-07-14 21:25:07.789572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.290 [2024-07-14 21:25:07.789612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:56.290 [2024-07-14 21:25:07.792888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.290 [2024-07-14 21:25:07.792943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:56.290 [2024-07-14 21:25:07.792959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:25:56.290 [2024-07-14 21:25:07.792976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.290 [2024-07-14 21:25:07.794924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.290 [2024-07-14 21:25:07.794976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:56.290 [2024-07-14 21:25:07.794993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.915 ms 00:25:56.290 [2024-07-14 21:25:07.795008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.290 [2024-07-14 21:25:07.811267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.290 [2024-07-14 21:25:07.811323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:56.290 [2024-07-14 21:25:07.811342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.231 ms 00:25:56.290 [2024-07-14 21:25:07.811358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.290 [2024-07-14 21:25:07.817894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.290 [2024-07-14 21:25:07.817929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:56.290 [2024-07-14 21:25:07.817968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.489 ms 00:25:56.290 [2024-07-14 21:25:07.817982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.550 [2024-07-14 21:25:07.849651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.550 [2024-07-14 21:25:07.849708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:56.550 [2024-07-14 21:25:07.849728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.580 ms 00:25:56.550 [2024-07-14 21:25:07.849743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.550 [2024-07-14 21:25:07.868299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.550 [2024-07-14 21:25:07.868356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:56.550 [2024-07-14 21:25:07.868379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.503 ms 00:25:56.550 [2024-07-14 21:25:07.868394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.550 [2024-07-14 21:25:07.868595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.550 [2024-07-14 21:25:07.868622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:56.550 [2024-07-14 21:25:07.868637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:56.550 [2024-07-14 21:25:07.868651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.550 [2024-07-14 21:25:07.900244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.550 [2024-07-14 
21:25:07.900323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:56.550 [2024-07-14 21:25:07.900343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.567 ms 00:25:56.550 [2024-07-14 21:25:07.900356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.550 [2024-07-14 21:25:07.929973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.550 [2024-07-14 21:25:07.930054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:56.550 [2024-07-14 21:25:07.930073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.555 ms 00:25:56.550 [2024-07-14 21:25:07.930086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.550 [2024-07-14 21:25:07.961501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.550 [2024-07-14 21:25:07.961593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:56.550 [2024-07-14 21:25:07.961615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.354 ms 00:25:56.550 [2024-07-14 21:25:07.961637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.550 [2024-07-14 21:25:07.991471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.550 [2024-07-14 21:25:07.991534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:56.550 [2024-07-14 21:25:07.991551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.706 ms 00:25:56.550 [2024-07-14 21:25:07.991564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.550 [2024-07-14 21:25:07.991611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:56.550 [2024-07-14 21:25:07.991658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991879] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:56.550 [2024-07-14 21:25:07.991930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.991958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.991972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.991985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 
[2024-07-14 21:25:07.992254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:25:56.551 [2024-07-14 21:25:07.992663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.992987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:56.551 [2024-07-14 21:25:07.993211] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:56.551 [2024-07-14 21:25:07.993222] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b64d2f-1cec-4ba9-86bd-e5a50c3d64ab 00:25:56.551 [2024-07-14 21:25:07.993236] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:56.551 [2024-07-14 21:25:07.993247] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:56.551 [2024-07-14 21:25:07.993267] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:56.551 [2024-07-14 21:25:07.993278] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:56.551 [2024-07-14 21:25:07.993307] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:56.551 [2024-07-14 21:25:07.993318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:56.551 [2024-07-14 21:25:07.993331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:56.551 [2024-07-14 21:25:07.993341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:56.551 [2024-07-14 21:25:07.993352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:56.551 [2024-07-14 21:25:07.993364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.551 [2024-07-14 21:25:07.993377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:56.551 [2024-07-14 21:25:07.993390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:25:56.551 [2024-07-14 21:25:07.993403] mngt/ftl_mngt.c: 
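
The statistics dump above reports 960 total writes against 0 user writes, so the write amplification factor (WAF) is printed as "inf": every block written so far was internal FTL traffic (metadata and startup housekeeping), none of it user data. A minimal sketch of the ratio being printed, using only the two counters shown in the dump (the helper name is illustrative, not SPDK's):

    # Mirrors the WAF arithmetic in the ftl_dev_dump_stats output:
    # WAF = total device writes / user writes.
    def waf(total_writes: int, user_writes: int) -> float:
        if user_writes == 0:
            return float("inf")  # no user data yet -> the "WAF: inf" above
        return total_writes / user_writes

    print(waf(960, 0))                    # inf    (this dump)
    print(round(waf(131520, 130560), 4))  # 1.0074 (the second dump, further below)
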
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.551 [2024-07-14 21:25:08.008987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.551 [2024-07-14 21:25:08.009028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:56.551 [2024-07-14 21:25:08.009060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.520 ms 00:25:56.551 [2024-07-14 21:25:08.009074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.551 [2024-07-14 21:25:08.009472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.551 [2024-07-14 21:25:08.009491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:56.551 [2024-07-14 21:25:08.009504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:25:56.551 [2024-07-14 21:25:08.009517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.551 [2024-07-14 21:25:08.056652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.552 [2024-07-14 21:25:08.056736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.552 [2024-07-14 21:25:08.056770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.552 [2024-07-14 21:25:08.056798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.552 [2024-07-14 21:25:08.056922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.552 [2024-07-14 21:25:08.056942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.552 [2024-07-14 21:25:08.056954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.552 [2024-07-14 21:25:08.056967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.552 [2024-07-14 21:25:08.057101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.552 [2024-07-14 21:25:08.057128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.552 [2024-07-14 21:25:08.057141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.552 [2024-07-14 21:25:08.057154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.552 [2024-07-14 21:25:08.057179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.552 [2024-07-14 21:25:08.057198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.552 [2024-07-14 21:25:08.057210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.552 [2024-07-14 21:25:08.057222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.148349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.811 [2024-07-14 21:25:08.148420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.811 [2024-07-14 21:25:08.148479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.811 [2024-07-14 21:25:08.148511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.225435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.811 [2024-07-14 21:25:08.225519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.811 [2024-07-14 21:25:08.225537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:25:56.811 [2024-07-14 21:25:08.225550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.225658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.811 [2024-07-14 21:25:08.225679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.811 [2024-07-14 21:25:08.225693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.811 [2024-07-14 21:25:08.225705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.225781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.811 [2024-07-14 21:25:08.225803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.811 [2024-07-14 21:25:08.225815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.811 [2024-07-14 21:25:08.225865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.226005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.811 [2024-07-14 21:25:08.226027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.811 [2024-07-14 21:25:08.226041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.811 [2024-07-14 21:25:08.226057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.226140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.811 [2024-07-14 21:25:08.226168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:56.811 [2024-07-14 21:25:08.226182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.811 [2024-07-14 21:25:08.226196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.226259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.811 [2024-07-14 21:25:08.226277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.811 [2024-07-14 21:25:08.226289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.811 [2024-07-14 21:25:08.226305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.226367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.811 [2024-07-14 21:25:08.226390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.811 [2024-07-14 21:25:08.226403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.811 [2024-07-14 21:25:08.226417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.811 [2024-07-14 21:25:08.226615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 437.106 ms, result 0 00:25:56.811 true 00:25:56.811 21:25:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82982 00:25:56.811 21:25:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82982 00:25:56.811 21:25:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:56.811 [2024-07-14 21:25:08.352748] Starting SPDK v24.09-pre git sha1 719d03c6a / 
DPDK 24.03.0 initialization... 00:25:56.811 [2024-07-14 21:25:08.353261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83914 ] 00:25:57.070 [2024-07-14 21:25:08.523795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.328 [2024-07-14 21:25:08.685712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.096  Copying: 192/1024 [MB] (192 MBps) Copying: 386/1024 [MB] (194 MBps) Copying: 582/1024 [MB] (195 MBps) Copying: 775/1024 [MB] (192 MBps) Copying: 964/1024 [MB] (189 MBps) Copying: 1024/1024 [MB] (average 192 MBps) 00:26:04.096 00:26:04.096 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82982 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:04.096 21:25:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:04.096 [2024-07-14 21:25:15.370757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:04.096 [2024-07-14 21:25:15.370983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83986 ] 00:26:04.096 [2024-07-14 21:25:15.540038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.354 [2024-07-14 21:25:15.717230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.612 [2024-07-14 21:25:15.998466] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:04.612 [2024-07-14 21:25:15.998551] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:04.612 [2024-07-14 21:25:16.064067] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:04.613 [2024-07-14 21:25:16.064513] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:04.613 [2024-07-14 21:25:16.064850] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:04.872 [2024-07-14 21:25:16.329255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.872 [2024-07-14 21:25:16.329310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:04.872 [2024-07-14 21:25:16.329343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:04.872 [2024-07-14 21:25:16.329354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.872 [2024-07-14 21:25:16.329430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.872 [2024-07-14 21:25:16.329449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:04.872 [2024-07-14 21:25:16.329460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:04.872 [2024-07-14 21:25:16.329474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.872 [2024-07-14 21:25:16.329502] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:04.872 [2024-07-14 21:25:16.330451] mngt/ftl_mngt_bdev.c: 
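
The spdk_dd invocation at dirty_shutdown.sh line 87 copies --bs=4096 bytes --count=262144 times, which is exactly the 1024 MB total the progress lines report; the kill -9 at line 83 is what makes the shutdown "dirty", so the restarted target above has to perform recovery on the blobstore instead of loading a clean state. A quick check of the transfer size (plain arithmetic, no SPDK APIs involved):

    # spdk_dd --bs=4096 --count=262144 moves bs * count bytes in total:
    bs, count = 4096, 262144
    print(bs * count)                   # 1073741824 bytes
    print(bs * count // (1024 * 1024))  # 1024 -> the "1024/1024 [MB]" total above
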
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:04.872 [2024-07-14 21:25:16.330485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.872 [2024-07-14 21:25:16.330497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:04.872 [2024-07-14 21:25:16.330509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:26:04.872 [2024-07-14 21:25:16.330518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.872 [2024-07-14 21:25:16.331711] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:04.872 [2024-07-14 21:25:16.346029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.872 [2024-07-14 21:25:16.346068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:04.872 [2024-07-14 21:25:16.346099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.320 ms 00:26:04.872 [2024-07-14 21:25:16.346116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.872 [2024-07-14 21:25:16.346179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.872 [2024-07-14 21:25:16.346196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:04.872 [2024-07-14 21:25:16.346207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:04.872 [2024-07-14 21:25:16.346217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.872 [2024-07-14 21:25:16.350475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.873 [2024-07-14 21:25:16.350514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:04.873 [2024-07-14 21:25:16.350550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.180 ms 00:26:04.873 [2024-07-14 21:25:16.350560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.873 [2024-07-14 21:25:16.350643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.873 [2024-07-14 21:25:16.350659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:04.873 [2024-07-14 21:25:16.350670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:04.873 [2024-07-14 21:25:16.350680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.873 [2024-07-14 21:25:16.350733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.873 [2024-07-14 21:25:16.350749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:04.873 [2024-07-14 21:25:16.350760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:04.873 [2024-07-14 21:25:16.350773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.873 [2024-07-14 21:25:16.350819] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:04.873 [2024-07-14 21:25:16.354843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.873 [2024-07-14 21:25:16.354894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:04.873 [2024-07-14 21:25:16.354924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.049 ms 00:26:04.873 [2024-07-14 21:25:16.354935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.873 [2024-07-14 
21:25:16.354972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.873 [2024-07-14 21:25:16.354986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:04.873 [2024-07-14 21:25:16.354997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:04.873 [2024-07-14 21:25:16.355007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.873 [2024-07-14 21:25:16.355047] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:04.873 [2024-07-14 21:25:16.355074] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:04.873 [2024-07-14 21:25:16.355115] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:04.873 [2024-07-14 21:25:16.355134] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:04.873 [2024-07-14 21:25:16.355257] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:04.873 [2024-07-14 21:25:16.355271] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:04.873 [2024-07-14 21:25:16.355283] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:04.873 [2024-07-14 21:25:16.355296] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:04.873 [2024-07-14 21:25:16.355308] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:04.873 [2024-07-14 21:25:16.355319] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:04.873 [2024-07-14 21:25:16.355332] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:04.873 [2024-07-14 21:25:16.355342] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:04.873 [2024-07-14 21:25:16.355352] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:04.873 [2024-07-14 21:25:16.355362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.873 [2024-07-14 21:25:16.355372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:04.873 [2024-07-14 21:25:16.355383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:26:04.873 [2024-07-14 21:25:16.355393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.873 [2024-07-14 21:25:16.355470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.873 [2024-07-14 21:25:16.355484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:04.873 [2024-07-14 21:25:16.355494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:04.873 [2024-07-14 21:25:16.355503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.873 [2024-07-14 21:25:16.355605] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:04.873 [2024-07-14 21:25:16.355636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:04.873 [2024-07-14 21:25:16.355646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:04.873 [2024-07-14 21:25:16.355657] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:26:04.873 [2024-07-14 21:25:16.355666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:04.873 [2024-07-14 21:25:16.355675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:04.873 [2024-07-14 21:25:16.355684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:04.873 [2024-07-14 21:25:16.355695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:04.873 [2024-07-14 21:25:16.355704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:04.873 [2024-07-14 21:25:16.355714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:04.873 [2024-07-14 21:25:16.355739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:04.873 [2024-07-14 21:25:16.355748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:04.873 [2024-07-14 21:25:16.355757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:04.873 [2024-07-14 21:25:16.355767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:04.873 [2024-07-14 21:25:16.355776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:04.873 [2024-07-14 21:25:16.355785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.873 [2024-07-14 21:25:16.355823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:04.873 [2024-07-14 21:25:16.355833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:04.873 [2024-07-14 21:25:16.355842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.873 [2024-07-14 21:25:16.355852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:04.873 [2024-07-14 21:25:16.355862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:04.873 [2024-07-14 21:25:16.355872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:04.873 [2024-07-14 21:25:16.355882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:04.873 [2024-07-14 21:25:16.355936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:04.873 [2024-07-14 21:25:16.355947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:04.873 [2024-07-14 21:25:16.355956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:04.873 [2024-07-14 21:25:16.355966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:04.873 [2024-07-14 21:25:16.355975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:04.873 [2024-07-14 21:25:16.355985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:04.873 [2024-07-14 21:25:16.355994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:04.873 [2024-07-14 21:25:16.356003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:04.873 [2024-07-14 21:25:16.356013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:04.873 [2024-07-14 21:25:16.356023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:04.873 [2024-07-14 21:25:16.356032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:04.873 [2024-07-14 21:25:16.356042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:04.873 [2024-07-14 21:25:16.356051] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:04.873 [2024-07-14 21:25:16.356060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:04.873 [2024-07-14 21:25:16.356070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:04.873 [2024-07-14 21:25:16.356079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:04.873 [2024-07-14 21:25:16.356088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.873 [2024-07-14 21:25:16.356098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:04.873 [2024-07-14 21:25:16.356109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:04.873 [2024-07-14 21:25:16.356118] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.873 [2024-07-14 21:25:16.356126] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:04.873 [2024-07-14 21:25:16.356154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:04.873 [2024-07-14 21:25:16.356183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:04.873 [2024-07-14 21:25:16.356194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.873 [2024-07-14 21:25:16.356220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:04.873 [2024-07-14 21:25:16.356230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:04.873 [2024-07-14 21:25:16.356240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:04.873 [2024-07-14 21:25:16.356250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:04.873 [2024-07-14 21:25:16.356275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:04.873 [2024-07-14 21:25:16.356301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:04.873 [2024-07-14 21:25:16.356312] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:04.873 [2024-07-14 21:25:16.356331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.873 [2024-07-14 21:25:16.356344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:04.873 [2024-07-14 21:25:16.356355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:04.873 [2024-07-14 21:25:16.356366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:04.873 [2024-07-14 21:25:16.356377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:04.873 [2024-07-14 21:25:16.356388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:04.873 [2024-07-14 21:25:16.356398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:04.873 [2024-07-14 21:25:16.356409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:04.873 [2024-07-14 
21:25:16.356420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:04.873 [2024-07-14 21:25:16.356430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:04.873 [2024-07-14 21:25:16.356442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:04.873 [2024-07-14 21:25:16.356479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:04.873 [2024-07-14 21:25:16.356491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:04.873 [2024-07-14 21:25:16.356502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:04.874 [2024-07-14 21:25:16.356514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:04.874 [2024-07-14 21:25:16.356524] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:04.874 [2024-07-14 21:25:16.356536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.874 [2024-07-14 21:25:16.356549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:04.874 [2024-07-14 21:25:16.356560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:04.874 [2024-07-14 21:25:16.356572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:04.874 [2024-07-14 21:25:16.356583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:04.874 [2024-07-14 21:25:16.356595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.874 [2024-07-14 21:25:16.356606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:04.874 [2024-07-14 21:25:16.356619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:26:04.874 [2024-07-14 21:25:16.356629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.874 [2024-07-14 21:25:16.412396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.874 [2024-07-14 21:25:16.412501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:04.874 [2024-07-14 21:25:16.412524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.679 ms 00:26:04.874 [2024-07-14 21:25:16.412536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.874 [2024-07-14 21:25:16.412660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.874 [2024-07-14 21:25:16.412687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:04.874 [2024-07-14 21:25:16.412700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:04.874 [2024-07-14 21:25:16.412717] mngt/ftl_mngt.c: 
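
The layout numbers above are self-consistent if you assume FTL's 4 KiB block size (an assumption of this sketch, though the arithmetic reproduces the logged values exactly): the L2P region must hold 20971520 entries of 4 bytes each, which is the 80.00 MiB / 0x5000-block l2p region, and the 0x1900000-block data region is the 102400.00 MiB base device area:

    # Cross-checking the ftl_layout / superblock dump, assuming 4 KiB FTL blocks:
    FTL_BLOCK = 4096
    l2p_bytes = 20971520 * 4            # L2P entries * L2P address size
    print(l2p_bytes // (1024 * 1024))   # 80     -> "Region l2p ... blocks: 80.00 MiB"
    print(hex(l2p_bytes // FTL_BLOCK))  # 0x5000 -> "Region type:0x2 ... blk_sz:0x5000"
    print(0x1900000 * FTL_BLOCK // (1024 * 1024))
                                        # 102400 -> "Region data_btm ... 102400.00 MiB"
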
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.448139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.448193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.132 [2024-07-14 21:25:16.448211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.313 ms 00:26:05.132 [2024-07-14 21:25:16.448221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.448285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.448305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.132 [2024-07-14 21:25:16.448316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:05.132 [2024-07-14 21:25:16.448326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.448673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.448691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.132 [2024-07-14 21:25:16.448704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:26:05.132 [2024-07-14 21:25:16.448714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.448915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.448935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.132 [2024-07-14 21:25:16.448950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:26:05.132 [2024-07-14 21:25:16.448961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.463625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.463682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.132 [2024-07-14 21:25:16.463717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.637 ms 00:26:05.132 [2024-07-14 21:25:16.463729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.479931] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:05.132 [2024-07-14 21:25:16.479990] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:05.132 [2024-07-14 21:25:16.480010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.480023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:05.132 [2024-07-14 21:25:16.480038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.076 ms 00:26:05.132 [2024-07-14 21:25:16.480049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.508304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.508375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:05.132 [2024-07-14 21:25:16.508410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.189 ms 00:26:05.132 [2024-07-14 21:25:16.508421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 
21:25:16.523424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.523465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:05.132 [2024-07-14 21:25:16.523480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.892 ms 00:26:05.132 [2024-07-14 21:25:16.523490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.537764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.537863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:05.132 [2024-07-14 21:25:16.537882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.231 ms 00:26:05.132 [2024-07-14 21:25:16.537892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.538808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.538865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:05.132 [2024-07-14 21:25:16.538887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:26:05.132 [2024-07-14 21:25:16.538898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.610477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.610542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:05.132 [2024-07-14 21:25:16.610561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.556 ms 00:26:05.132 [2024-07-14 21:25:16.610588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.622725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:05.132 [2024-07-14 21:25:16.625331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.625365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:05.132 [2024-07-14 21:25:16.625397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.675 ms 00:26:05.132 [2024-07-14 21:25:16.625408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.625509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.625527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:05.132 [2024-07-14 21:25:16.625543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:05.132 [2024-07-14 21:25:16.625554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.625635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.625652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:05.132 [2024-07-14 21:25:16.625663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:05.132 [2024-07-14 21:25:16.625673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.625702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.625716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:05.132 [2024-07-14 21:25:16.625742] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:05.132 [2024-07-14 21:25:16.625757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.625789] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:05.132 [2024-07-14 21:25:16.625804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.625815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:05.132 [2024-07-14 21:25:16.625846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:05.132 [2024-07-14 21:25:16.625876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.655909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.655970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:05.132 [2024-07-14 21:25:16.656010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.002 ms 00:26:05.132 [2024-07-14 21:25:16.656027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.656116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.132 [2024-07-14 21:25:16.656135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:05.132 [2024-07-14 21:25:16.656147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:05.132 [2024-07-14 21:25:16.656158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.132 [2024-07-14 21:25:16.657424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.606 ms, result 0 00:26:48.233  Copying: 25/1024 [MB] (25 MBps) Copying: 49/1024 [MB] (24 MBps) Copying: 74/1024 [MB] (24 MBps) Copying: 99/1024 [MB] (24 MBps) Copying: 123/1024 [MB] (24 MBps) Copying: 148/1024 [MB] (24 MBps) Copying: 172/1024 [MB] (23 MBps) Copying: 196/1024 [MB] (24 MBps) Copying: 220/1024 [MB] (24 MBps) Copying: 244/1024 [MB] (24 MBps) Copying: 268/1024 [MB] (24 MBps) Copying: 294/1024 [MB] (25 MBps) Copying: 318/1024 [MB] (24 MBps) Copying: 343/1024 [MB] (24 MBps) Copying: 367/1024 [MB] (24 MBps) Copying: 391/1024 [MB] (24 MBps) Copying: 415/1024 [MB] (24 MBps) Copying: 439/1024 [MB] (23 MBps) Copying: 463/1024 [MB] (24 MBps) Copying: 487/1024 [MB] (23 MBps) Copying: 511/1024 [MB] (23 MBps) Copying: 535/1024 [MB] (24 MBps) Copying: 560/1024 [MB] (24 MBps) Copying: 585/1024 [MB] (24 MBps) Copying: 609/1024 [MB] (24 MBps) Copying: 634/1024 [MB] (24 MBps) Copying: 659/1024 [MB] (24 MBps) Copying: 683/1024 [MB] (24 MBps) Copying: 708/1024 [MB] (24 MBps) Copying: 733/1024 [MB] (24 MBps) Copying: 757/1024 [MB] (24 MBps) Copying: 782/1024 [MB] (24 MBps) Copying: 807/1024 [MB] (24 MBps) Copying: 832/1024 [MB] (25 MBps) Copying: 857/1024 [MB] (24 MBps) Copying: 881/1024 [MB] (24 MBps) Copying: 907/1024 [MB] (25 MBps) Copying: 932/1024 [MB] (24 MBps) Copying: 956/1024 [MB] (24 MBps) Copying: 981/1024 [MB] (24 MBps) Copying: 1006/1024 [MB] (24 MBps) Copying: 1023/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-14 21:25:59.595005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.233 [2024-07-14 21:25:59.595146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:48.233 [2024-07-14 21:25:59.595186] 
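
Each management step above logs an Action (or Rollback) record with a name, duration, and status, and the "Management process finished" line closes the sequence; for this startup the summary (327.606 ms) slightly exceeds the sum of the per-step durations because it measures wall time across the whole sequence. A throwaway parser for those records (the regex is this sketch's assumption; the "duration: X ms" field format is taken from the log):

    import re

    # Sum the per-step "duration: X ms" fields from trace_step records.
    # The closing "duration = 437.106 ms" summary lines use "=" rather
    # than ":", so they are deliberately not matched here.
    DUR = re.compile(r"duration: ([0-9.]+) ms")

    def summed_step_time_ms(log_lines):
        return sum(float(m.group(1))
                   for line in log_lines
                   for m in DUR.finditer(line))
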
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:48.233 [2024-07-14 21:25:59.595199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.233 [2024-07-14 21:25:59.599027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:48.233 [2024-07-14 21:25:59.605653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.233 [2024-07-14 21:25:59.605860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:48.233 [2024-07-14 21:25:59.605998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.358 ms 00:26:48.233 [2024-07-14 21:25:59.606047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.233 [2024-07-14 21:25:59.617937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.233 [2024-07-14 21:25:59.618120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:48.233 [2024-07-14 21:25:59.618264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.785 ms 00:26:48.233 [2024-07-14 21:25:59.618286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.233 [2024-07-14 21:25:59.640212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.233 [2024-07-14 21:25:59.640255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:48.233 [2024-07-14 21:25:59.640288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.897 ms 00:26:48.233 [2024-07-14 21:25:59.640299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.233 [2024-07-14 21:25:59.646523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.233 [2024-07-14 21:25:59.646553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:48.233 [2024-07-14 21:25:59.646597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:26:48.233 [2024-07-14 21:25:59.646612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.233 [2024-07-14 21:25:59.674906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.233 [2024-07-14 21:25:59.674948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:48.233 [2024-07-14 21:25:59.674980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.233 ms 00:26:48.233 [2024-07-14 21:25:59.674991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.233 [2024-07-14 21:25:59.691470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.233 [2024-07-14 21:25:59.691509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:48.233 [2024-07-14 21:25:59.691541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.439 ms 00:26:48.233 [2024-07-14 21:25:59.691552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.492 [2024-07-14 21:25:59.810498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.492 [2024-07-14 21:25:59.810588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:48.492 [2024-07-14 21:25:59.810609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.898 ms 00:26:48.492 [2024-07-14 21:25:59.810621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.492 [2024-07-14 21:25:59.838510] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.492 [2024-07-14 21:25:59.838549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:48.492 [2024-07-14 21:25:59.838581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.868 ms 00:26:48.492 [2024-07-14 21:25:59.838592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.492 [2024-07-14 21:25:59.865587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.492 [2024-07-14 21:25:59.865622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:48.492 [2024-07-14 21:25:59.865653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.958 ms 00:26:48.492 [2024-07-14 21:25:59.865664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.492 [2024-07-14 21:25:59.893935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.492 [2024-07-14 21:25:59.893977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:48.492 [2024-07-14 21:25:59.894009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.234 ms 00:26:48.492 [2024-07-14 21:25:59.894020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.492 [2024-07-14 21:25:59.924697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.492 [2024-07-14 21:25:59.924746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:48.492 [2024-07-14 21:25:59.924781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.591 ms 00:26:48.492 [2024-07-14 21:25:59.924792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.492 [2024-07-14 21:25:59.924858] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:48.492 [2024-07-14 21:25:59.924882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130560 / 261120 wr_cnt: 1 state: open 00:26:48.492 [2024-07-14 21:25:59.924898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.924910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.924922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.924934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.924945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.924957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.924969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.924980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.924992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 
261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:48.492 [2024-07-14 21:25:59.925347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925680] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 
21:25:59.925988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.925999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:48.493 [2024-07-14 21:25:59.926159] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:48.493 [2024-07-14 21:25:59.926172] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b64d2f-1cec-4ba9-86bd-e5a50c3d64ab 00:26:48.493 [2024-07-14 21:25:59.926183] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130560 00:26:48.493 [2024-07-14 21:25:59.926194] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131520 00:26:48.493 [2024-07-14 21:25:59.926212] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130560 00:26:48.493 [2024-07-14 21:25:59.926227] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:26:48.493 [2024-07-14 21:25:59.926237] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:48.493 [2024-07-14 21:25:59.926249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:48.493 [2024-07-14 21:25:59.926259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:48.493 [2024-07-14 21:25:59.926269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:48.493 [2024-07-14 21:25:59.926278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:48.493 [2024-07-14 21:25:59.926290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.493 [2024-07-14 21:25:59.926302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:48.493 [2024-07-14 21:25:59.926326] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:26:48.493 [2024-07-14 21:25:59.926337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.493 [2024-07-14 21:25:59.941886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.493 [2024-07-14 21:25:59.941923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:48.493 [2024-07-14 21:25:59.941961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.508 ms 00:26:48.493 [2024-07-14 21:25:59.941971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.493 [2024-07-14 21:25:59.942396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.493 [2024-07-14 21:25:59.942424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:48.493 [2024-07-14 21:25:59.942438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:26:48.493 [2024-07-14 21:25:59.942449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.493 [2024-07-14 21:25:59.975003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.493 [2024-07-14 21:25:59.975053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:48.493 [2024-07-14 21:25:59.975084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.493 [2024-07-14 21:25:59.975095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.493 [2024-07-14 21:25:59.975160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.493 [2024-07-14 21:25:59.975173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:48.493 [2024-07-14 21:25:59.975184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.493 [2024-07-14 21:25:59.975194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.493 [2024-07-14 21:25:59.975263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.493 [2024-07-14 21:25:59.975285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:48.493 [2024-07-14 21:25:59.975296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.493 [2024-07-14 21:25:59.975306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.493 [2024-07-14 21:25:59.975325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.493 [2024-07-14 21:25:59.975337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:48.493 [2024-07-14 21:25:59.975347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.493 [2024-07-14 21:25:59.975356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.066553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.752 [2024-07-14 21:26:00.066610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:48.752 [2024-07-14 21:26:00.066644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.752 [2024-07-14 21:26:00.066654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.140798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.752 [2024-07-14 21:26:00.140874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 
00:26:48.752 [2024-07-14 21:26:00.140925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.752 [2024-07-14 21:26:00.140937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.141037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.752 [2024-07-14 21:26:00.141052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:48.752 [2024-07-14 21:26:00.141071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.752 [2024-07-14 21:26:00.141082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.141123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.752 [2024-07-14 21:26:00.141137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:48.752 [2024-07-14 21:26:00.141148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.752 [2024-07-14 21:26:00.141158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.141300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.752 [2024-07-14 21:26:00.141318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:48.752 [2024-07-14 21:26:00.141331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.752 [2024-07-14 21:26:00.141348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.141396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.752 [2024-07-14 21:26:00.141412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:48.752 [2024-07-14 21:26:00.141424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.752 [2024-07-14 21:26:00.141435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.141479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.752 [2024-07-14 21:26:00.141501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:48.752 [2024-07-14 21:26:00.141514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.752 [2024-07-14 21:26:00.141531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.141585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.752 [2024-07-14 21:26:00.141601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:48.752 [2024-07-14 21:26:00.141613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.752 [2024-07-14 21:26:00.141624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.752 [2024-07-14 21:26:00.141825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.940 ms, result 0 00:26:50.653 00:26:50.653 00:26:50.653 21:26:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:52.554 21:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:52.554 
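The statistics block in the shutdown trace above reports total writes 131520 against user writes 130560 — only 960 blocks of extra device-side traffic, presumably FTL metadata — and a WAF of 1.0074. Assuming the dumped WAF is simply total writes divided by user writes, the figure checks out; a minimal sketch (hypothetical, not part of the test suite):

# Sanity-check the WAF value printed by ftl_dev_dump_stats above.
# Assumption: WAF = total writes / user writes, shown to 4 decimals.
total_writes = 131520  # "total writes" from the dump
user_writes = 130560   # "user writes" from the dump
print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.0074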
[2024-07-14 21:26:03.782351] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:52.554 [2024-07-14 21:26:03.782525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84457 ] 00:26:52.554 [2024-07-14 21:26:03.952957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.811 [2024-07-14 21:26:04.119629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.069 [2024-07-14 21:26:04.390100] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.069 [2024-07-14 21:26:04.390186] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.069 [2024-07-14 21:26:04.547458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.069 [2024-07-14 21:26:04.547516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:53.069 [2024-07-14 21:26:04.547551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:53.069 [2024-07-14 21:26:04.547561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.069 [2024-07-14 21:26:04.547627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.069 [2024-07-14 21:26:04.547646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:53.069 [2024-07-14 21:26:04.547657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:53.069 [2024-07-14 21:26:04.547671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.069 [2024-07-14 21:26:04.547698] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:53.069 [2024-07-14 21:26:04.548676] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:53.069 [2024-07-14 21:26:04.548720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.069 [2024-07-14 21:26:04.548739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:53.069 [2024-07-14 21:26:04.548751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:26:53.069 [2024-07-14 21:26:04.548762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.069 [2024-07-14 21:26:04.549964] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:53.069 [2024-07-14 21:26:04.564448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.069 [2024-07-14 21:26:04.564511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:53.069 [2024-07-14 21:26:04.564546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.485 ms 00:26:53.069 [2024-07-14 21:26:04.564557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.069 [2024-07-14 21:26:04.564638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.069 [2024-07-14 21:26:04.564657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:53.069 [2024-07-14 21:26:04.564674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:53.069 [2024-07-14 21:26:04.564685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
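Every management step in these traces is logged by mngt/ftl_mngt.c:trace_step as an Action/name/duration/status quartet, so step timings can be recovered mechanically when hunting for slow phases in a console log like this one. Below is a small, hypothetical log-analysis sketch, not part of SPDK; the regexes assume the flowed record format shown here, where each step name is followed by the next record's elapsed-time stamp:

import re

# "name: <step>" and "duration: <ms> ms" records emitted by trace_step.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?) \d{2}:\d{2}:\d{2}")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def slowest_steps(log_text, n=5):
    """Pair each step name with the duration record that follows it."""
    names = NAME_RE.findall(log_text)
    durations = [float(d) for d in DUR_RE.findall(log_text)]
    return sorted(zip(names, durations), key=lambda nd: nd[1], reverse=True)[:n]

Applied to the startup trace that follows, the two slowest steps come out as 'Restore P2L checkpoints' (68.286 ms) and 'Initialize metadata' (39.473 ms).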
00:26:53.069 [2024-07-14 21:26:04.569292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.069 [2024-07-14 21:26:04.569339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:53.069 [2024-07-14 21:26:04.569371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.495 ms 00:26:53.069 [2024-07-14 21:26:04.569382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.069 [2024-07-14 21:26:04.569488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.069 [2024-07-14 21:26:04.569514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:53.069 [2024-07-14 21:26:04.569533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:53.069 [2024-07-14 21:26:04.569551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.070 [2024-07-14 21:26:04.569613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.070 [2024-07-14 21:26:04.569629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:53.070 [2024-07-14 21:26:04.569640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:53.070 [2024-07-14 21:26:04.569650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.070 [2024-07-14 21:26:04.569681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:53.070 [2024-07-14 21:26:04.574358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.070 [2024-07-14 21:26:04.574399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:53.070 [2024-07-14 21:26:04.574414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:26:53.070 [2024-07-14 21:26:04.574425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.070 [2024-07-14 21:26:04.574468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.070 [2024-07-14 21:26:04.574483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:53.070 [2024-07-14 21:26:04.574495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:53.070 [2024-07-14 21:26:04.574505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.070 [2024-07-14 21:26:04.574548] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:53.070 [2024-07-14 21:26:04.574576] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:53.070 [2024-07-14 21:26:04.574616] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:53.070 [2024-07-14 21:26:04.574637] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:53.070 [2024-07-14 21:26:04.574812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:53.070 [2024-07-14 21:26:04.574827] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:53.070 [2024-07-14 21:26:04.574841] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:53.070 [2024-07-14 21:26:04.574856] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:53.070 [2024-07-14 21:26:04.574889] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:53.070 [2024-07-14 21:26:04.574906] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:53.070 [2024-07-14 21:26:04.574917] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:53.070 [2024-07-14 21:26:04.574928] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:53.070 [2024-07-14 21:26:04.574939] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:53.070 [2024-07-14 21:26:04.574951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.070 [2024-07-14 21:26:04.574967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:53.070 [2024-07-14 21:26:04.574979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:26:53.070 [2024-07-14 21:26:04.574990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.070 [2024-07-14 21:26:04.575079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.070 [2024-07-14 21:26:04.575094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:53.070 [2024-07-14 21:26:04.575106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:53.070 [2024-07-14 21:26:04.575116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.070 [2024-07-14 21:26:04.575250] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:53.070 [2024-07-14 21:26:04.575274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:53.070 [2024-07-14 21:26:04.575293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:53.070 [2024-07-14 21:26:04.575326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:53.070 [2024-07-14 21:26:04.575358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.070 [2024-07-14 21:26:04.575378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:53.070 [2024-07-14 21:26:04.575388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:53.070 [2024-07-14 21:26:04.575399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.070 [2024-07-14 21:26:04.575410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:53.070 [2024-07-14 21:26:04.575421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:53.070 [2024-07-14 21:26:04.575431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:53.070 [2024-07-14 21:26:04.575451] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:53.070 [2024-07-14 21:26:04.575526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:53.070 [2024-07-14 21:26:04.575557] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575581] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:53.070 [2024-07-14 21:26:04.575600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:53.070 [2024-07-14 21:26:04.575630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575639] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:53.070 [2024-07-14 21:26:04.575659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.070 [2024-07-14 21:26:04.575678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:53.070 [2024-07-14 21:26:04.575688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:53.070 [2024-07-14 21:26:04.575698] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.070 [2024-07-14 21:26:04.575709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:53.070 [2024-07-14 21:26:04.575719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:53.070 [2024-07-14 21:26:04.575729] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:53.070 [2024-07-14 21:26:04.575765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:53.070 [2024-07-14 21:26:04.575775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575785] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:53.070 [2024-07-14 21:26:04.575797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:53.070 [2024-07-14 21:26:04.575808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.070 [2024-07-14 21:26:04.575830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:53.070 [2024-07-14 21:26:04.575841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:53.070 [2024-07-14 
21:26:04.575851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:53.070 [2024-07-14 21:26:04.575878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:53.070 [2024-07-14 21:26:04.575889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:53.070 [2024-07-14 21:26:04.575900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:53.070 [2024-07-14 21:26:04.575912] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:53.070 [2024-07-14 21:26:04.575926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.070 [2024-07-14 21:26:04.575938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:53.070 [2024-07-14 21:26:04.575950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:53.070 [2024-07-14 21:26:04.575964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:53.070 [2024-07-14 21:26:04.575975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:53.070 [2024-07-14 21:26:04.575986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:53.070 [2024-07-14 21:26:04.575997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:53.070 [2024-07-14 21:26:04.576008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:53.070 [2024-07-14 21:26:04.576019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:53.070 [2024-07-14 21:26:04.576030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:53.070 [2024-07-14 21:26:04.576041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:53.070 [2024-07-14 21:26:04.576052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:53.070 [2024-07-14 21:26:04.576063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:53.070 [2024-07-14 21:26:04.576074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:53.070 [2024-07-14 21:26:04.576085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:53.070 [2024-07-14 21:26:04.576097] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:53.070 [2024-07-14 21:26:04.576123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.070 [2024-07-14 21:26:04.576149] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:53.070 [2024-07-14 21:26:04.576160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:53.070 [2024-07-14 21:26:04.576187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:53.070 [2024-07-14 21:26:04.576198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:53.070 [2024-07-14 21:26:04.576210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.070 [2024-07-14 21:26:04.576226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:53.070 [2024-07-14 21:26:04.576237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:26:53.070 [2024-07-14 21:26:04.576264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.615799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.615909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:53.329 [2024-07-14 21:26:04.615941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.473 ms 00:26:53.329 [2024-07-14 21:26:04.615954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.616071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.616086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:53.329 [2024-07-14 21:26:04.616098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:53.329 [2024-07-14 21:26:04.616109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.649742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.649794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:53.329 [2024-07-14 21:26:04.649871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.497 ms 00:26:53.329 [2024-07-14 21:26:04.649883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.649965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.649980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:53.329 [2024-07-14 21:26:04.649993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:53.329 [2024-07-14 21:26:04.650004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.650429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.650453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:53.329 [2024-07-14 21:26:04.650466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:26:53.329 [2024-07-14 21:26:04.650477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.650625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.650658] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:53.329 [2024-07-14 21:26:04.650669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:26:53.329 [2024-07-14 21:26:04.650679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.664716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.664758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:53.329 [2024-07-14 21:26:04.664791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.998 ms 00:26:53.329 [2024-07-14 21:26:04.664816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.679149] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:53.329 [2024-07-14 21:26:04.679191] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:53.329 [2024-07-14 21:26:04.679223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.679234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:53.329 [2024-07-14 21:26:04.679245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.202 ms 00:26:53.329 [2024-07-14 21:26:04.679254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.705193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.705232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:53.329 [2024-07-14 21:26:04.705264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.897 ms 00:26:53.329 [2024-07-14 21:26:04.705281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.720324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.720365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:53.329 [2024-07-14 21:26:04.720395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.999 ms 00:26:53.329 [2024-07-14 21:26:04.720406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.735610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.735651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:53.329 [2024-07-14 21:26:04.735682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.161 ms 00:26:53.329 [2024-07-14 21:26:04.735693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.736612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.736652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:53.329 [2024-07-14 21:26:04.736668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:26:53.329 [2024-07-14 21:26:04.736680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.804997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.805084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:26:53.329 [2024-07-14 21:26:04.805104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.286 ms 00:26:53.329 [2024-07-14 21:26:04.805115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.817486] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:53.329 [2024-07-14 21:26:04.820096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.820129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:53.329 [2024-07-14 21:26:04.820163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.906 ms 00:26:53.329 [2024-07-14 21:26:04.820173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.820281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.820299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:53.329 [2024-07-14 21:26:04.820311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:53.329 [2024-07-14 21:26:04.820322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.822048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.822086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:53.329 [2024-07-14 21:26:04.822117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:26:53.329 [2024-07-14 21:26:04.822127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.822163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.822177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:53.329 [2024-07-14 21:26:04.822189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:53.329 [2024-07-14 21:26:04.822199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.822235] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:53.329 [2024-07-14 21:26:04.822251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.822261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:53.329 [2024-07-14 21:26:04.822275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:53.329 [2024-07-14 21:26:04.822285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.851192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.851234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:53.329 [2024-07-14 21:26:04.851266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.869 ms 00:26:53.329 [2024-07-14 21:26:04.851277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.851354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.329 [2024-07-14 21:26:04.851380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:53.329 [2024-07-14 21:26:04.851392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 
00:26:53.329 [2024-07-14 21:26:04.851402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.329 [2024-07-14 21:26:04.859666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.271 ms, result 0 00:27:32.158  Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-14 21:26:43.452631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.452721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:32.158 [2024-07-14 21:26:43.452749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:32.158 [2024-07-14 21:26:43.452776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.452843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:32.158 [2024-07-14 21:26:43.459848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.459891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:32.158 [2024-07-14 21:26:43.459912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.960 ms 00:27:32.158 [2024-07-14 21:26:43.459928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.460307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.460338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:32.158 [2024-07-14 21:26:43.460357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:27:32.158 [2024-07-14 21:26:43.460373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.471681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.471719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:32.158 [2024-07-14 21:26:43.471735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.271 ms 00:27:32.158 [2024-07-14 21:26:43.471748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 
[2024-07-14 21:26:43.477793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.477835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:32.158 [2024-07-14 21:26:43.477847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.007 ms 00:27:32.158 [2024-07-14 21:26:43.477857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.505507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.505538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:32.158 [2024-07-14 21:26:43.505552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.554 ms 00:27:32.158 [2024-07-14 21:26:43.505562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.521647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.521684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:32.158 [2024-07-14 21:26:43.521698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.046 ms 00:27:32.158 [2024-07-14 21:26:43.521708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.525600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.525804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:32.158 [2024-07-14 21:26:43.525871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.850 ms 00:27:32.158 [2024-07-14 21:26:43.525887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.554149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.554182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:32.158 [2024-07-14 21:26:43.554196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.236 ms 00:27:32.158 [2024-07-14 21:26:43.554205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.582517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.582564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:32.158 [2024-07-14 21:26:43.582579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.273 ms 00:27:32.158 [2024-07-14 21:26:43.582589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.613994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.614219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:32.158 [2024-07-14 21:26:43.614334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.366 ms 00:27:32.158 [2024-07-14 21:26:43.614370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.646909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.158 [2024-07-14 21:26:43.646946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:32.158 [2024-07-14 21:26:43.646961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.412 ms 00:27:32.158 [2024-07-14 21:26:43.646973] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.158 [2024-07-14 21:26:43.647018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:32.158 [2024-07-14 21:26:43.647042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:32.158 [2024-07-14 21:26:43.647056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:27:32.158 [2024-07-14 21:26:43.647068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647346] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:32.158 [2024-07-14 21:26:43.647467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647596] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 
21:26:43.647909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.647993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:27:32.159 [2024-07-14 21:26:43.648231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:32.159 [2024-07-14 21:26:43.648259] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:32.159 [2024-07-14 21:26:43.648269] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b64d2f-1cec-4ba9-86bd-e5a50c3d64ab 00:27:32.159 [2024-07-14 21:26:43.648280] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:27:32.159 [2024-07-14 21:26:43.648327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135872 00:27:32.159 [2024-07-14 21:26:43.648337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133888 00:27:32.159 [2024-07-14 21:26:43.648354] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:27:32.159 [2024-07-14 21:26:43.648364] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:32.159 [2024-07-14 21:26:43.648376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:32.159 [2024-07-14 21:26:43.648401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:32.159 [2024-07-14 21:26:43.648411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:32.159 [2024-07-14 21:26:43.648420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:32.159 [2024-07-14 21:26:43.648430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.159 [2024-07-14 21:26:43.648449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:32.159 [2024-07-14 21:26:43.648460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:27:32.159 [2024-07-14 21:26:43.648470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.159 [2024-07-14 21:26:43.664872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.159 [2024-07-14 21:26:43.664917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:32.159 [2024-07-14 21:26:43.664932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.336 ms 00:27:32.159 [2024-07-14 21:26:43.664959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.159 [2024-07-14 21:26:43.665379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.159 [2024-07-14 21:26:43.665393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:32.159 [2024-07-14 21:26:43.665403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:27:32.159 [2024-07-14 21:26:43.665413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.702477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.702710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.418 [2024-07-14 21:26:43.702873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.703024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.703144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.703272] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.418 [2024-07-14 21:26:43.703368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.703412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.703610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.703726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.418 [2024-07-14 21:26:43.703864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.703987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.704025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.704040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.418 [2024-07-14 21:26:43.704068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.704085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.801779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.801859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.418 [2024-07-14 21:26:43.801883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.801894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.877567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.877628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.418 [2024-07-14 21:26:43.877644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.877655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.877731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.877746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.418 [2024-07-14 21:26:43.877757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.877782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.877843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.877856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.418 [2024-07-14 21:26:43.877889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.877935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.878066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.878083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.418 [2024-07-14 21:26:43.878095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.878106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.878161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:27:32.418 [2024-07-14 21:26:43.878192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:32.418 [2024-07-14 21:26:43.878203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.878213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.878256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.878270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.418 [2024-07-14 21:26:43.878281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.878291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.878406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.418 [2024-07-14 21:26:43.878422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.418 [2024-07-14 21:26:43.878433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.418 [2024-07-14 21:26:43.878443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.418 [2024-07-14 21:26:43.878591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 425.923 ms, result 0 00:27:33.355 00:27:33.355 00:27:33.355 21:26:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:35.888 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:35.888 21:26:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:35.888 [2024-07-14 21:26:46.897286] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
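The statistics dump in the shutdown trace above reports WAF: 1.0148 alongside total writes: 135872 and user writes: 133888, which is consistent with the write amplification factor being total media writes divided by user (host) writes; a later dump in this log prints "inf" for the same figure when user writes is 0. A minimal sketch of that arithmetic (the waf() helper is ours for illustration, not an SPDK API):

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification factor: media writes / host writes.
        # The ftl_debug dump prints "inf" when there were no user writes.
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(f"{waf(135872, 133888):.4f}")  # 1.0148, matching the dump above
    print(waf(960, 0))                   # inf, as in the post-copy dump below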
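Each FTL management step in these traces is logged as a trace_step group of notices: "Action" (or "Rollback"), then "name:", "duration:" in milliseconds, and "status:". Assuming a capture flattened into one stream like this log, a small sketch (our own parsing, not an SPDK tool) that totals per-step durations:

    import re

    # Pair each "name: <step>" with the "duration: <ms>" that follows it,
    # stopping the step name at the next elapsed-time stamp (HH:MM:SS.mmm).
    PAIR = re.compile(r"name: (.*?) \d{2}:\d{2}:\d{2}\.\d{3}.*?duration: ([0-9.]+) ms")

    def step_durations(log_text: str) -> dict:
        totals: dict = {}
        for name, ms in PAIR.findall(log_text):
            totals[name] = totals.get(name, 0.0) + float(ms)
        return totals

    # Sample built from the "Load super block" records in this log.
    sample = ("name: Load super block 00:27:36.147 [2024-07-14 21:26:47.689883] "
              "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.347 ms")
    print(step_durations(sample))  # {'Load super block': 15.347}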
00:27:35.888 [2024-07-14 21:26:46.897663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84888 ] 00:27:35.888 [2024-07-14 21:26:47.055156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.889 [2024-07-14 21:26:47.219876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.147 [2024-07-14 21:26:47.510877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:36.147 [2024-07-14 21:26:47.511138] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:36.147 [2024-07-14 21:26:47.671605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.147 [2024-07-14 21:26:47.671849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:36.147 [2024-07-14 21:26:47.671879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:36.147 [2024-07-14 21:26:47.671892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.147 [2024-07-14 21:26:47.671973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.147 [2024-07-14 21:26:47.671993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:36.147 [2024-07-14 21:26:47.672005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:36.147 [2024-07-14 21:26:47.672019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.147 [2024-07-14 21:26:47.672050] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:36.147 [2024-07-14 21:26:47.673164] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:36.147 [2024-07-14 21:26:47.673205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.147 [2024-07-14 21:26:47.673223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:36.147 [2024-07-14 21:26:47.673234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:27:36.147 [2024-07-14 21:26:47.673245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.147 [2024-07-14 21:26:47.674403] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:36.147 [2024-07-14 21:26:47.689749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.147 [2024-07-14 21:26:47.689848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:36.147 [2024-07-14 21:26:47.689883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.347 ms 00:27:36.147 [2024-07-14 21:26:47.689895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.147 [2024-07-14 21:26:47.689976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.147 [2024-07-14 21:26:47.689995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:36.147 [2024-07-14 21:26:47.690011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:36.147 [2024-07-14 21:26:47.690023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.407 [2024-07-14 21:26:47.694516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:36.407 [2024-07-14 21:26:47.694559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:36.407 [2024-07-14 21:26:47.694604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.403 ms 00:27:36.407 [2024-07-14 21:26:47.694615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.407 [2024-07-14 21:26:47.694716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.407 [2024-07-14 21:26:47.694737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:36.407 [2024-07-14 21:26:47.694749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:36.407 [2024-07-14 21:26:47.694765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.407 [2024-07-14 21:26:47.694905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.407 [2024-07-14 21:26:47.694924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:36.407 [2024-07-14 21:26:47.694936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:36.407 [2024-07-14 21:26:47.694963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.407 [2024-07-14 21:26:47.695026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:36.407 [2024-07-14 21:26:47.698993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.407 [2024-07-14 21:26:47.699027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:36.407 [2024-07-14 21:26:47.699041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.976 ms 00:27:36.407 [2024-07-14 21:26:47.699051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.407 [2024-07-14 21:26:47.699098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.407 [2024-07-14 21:26:47.699112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:36.407 [2024-07-14 21:26:47.699124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:36.407 [2024-07-14 21:26:47.699133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.407 [2024-07-14 21:26:47.699171] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:36.407 [2024-07-14 21:26:47.699214] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:36.407 [2024-07-14 21:26:47.699252] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:36.407 [2024-07-14 21:26:47.699272] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:36.407 [2024-07-14 21:26:47.699363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:36.407 [2024-07-14 21:26:47.699377] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:36.407 [2024-07-14 21:26:47.699390] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:36.407 [2024-07-14 21:26:47.699402] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:36.407 [2024-07-14 21:26:47.699413] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:36.407 [2024-07-14 21:26:47.699424] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:36.407 [2024-07-14 21:26:47.699434] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:36.407 [2024-07-14 21:26:47.699443] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:36.407 [2024-07-14 21:26:47.699453] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:36.407 [2024-07-14 21:26:47.699463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.407 [2024-07-14 21:26:47.699477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:36.407 [2024-07-14 21:26:47.699487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:27:36.407 [2024-07-14 21:26:47.699496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.407 [2024-07-14 21:26:47.699572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.407 [2024-07-14 21:26:47.699584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:36.407 [2024-07-14 21:26:47.699595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:36.407 [2024-07-14 21:26:47.699604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.407 [2024-07-14 21:26:47.699701] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:36.407 [2024-07-14 21:26:47.699716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:36.407 [2024-07-14 21:26:47.699731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:36.408 [2024-07-14 21:26:47.699741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.408 [2024-07-14 21:26:47.699751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:36.408 [2024-07-14 21:26:47.699760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:36.408 [2024-07-14 21:26:47.699769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:36.408 [2024-07-14 21:26:47.699778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:36.408 [2024-07-14 21:26:47.699788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:36.408 [2024-07-14 21:26:47.699797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:36.408 [2024-07-14 21:26:47.699805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:36.408 [2024-07-14 21:26:47.699855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:36.408 [2024-07-14 21:26:47.699866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:36.408 [2024-07-14 21:26:47.699876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:36.408 [2024-07-14 21:26:47.699886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:36.408 [2024-07-14 21:26:47.699895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.408 [2024-07-14 21:26:47.699904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:36.408 [2024-07-14 21:26:47.699913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:36.408 [2024-07-14 21:26:47.699924] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.408 [2024-07-14 21:26:47.699934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:36.408 [2024-07-14 21:26:47.699955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:36.408 [2024-07-14 21:26:47.699965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:36.408 [2024-07-14 21:26:47.699974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:36.408 [2024-07-14 21:26:47.699983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:36.408 [2024-07-14 21:26:47.699992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:36.408 [2024-07-14 21:26:47.700001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:36.408 [2024-07-14 21:26:47.700010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:36.408 [2024-07-14 21:26:47.700019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:36.408 [2024-07-14 21:26:47.700028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:36.408 [2024-07-14 21:26:47.700037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:36.408 [2024-07-14 21:26:47.700046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:36.408 [2024-07-14 21:26:47.700055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:36.408 [2024-07-14 21:26:47.700064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:36.408 [2024-07-14 21:26:47.700072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:36.408 [2024-07-14 21:26:47.700097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:36.408 [2024-07-14 21:26:47.700106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:36.408 [2024-07-14 21:26:47.700115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:36.408 [2024-07-14 21:26:47.700124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:36.408 [2024-07-14 21:26:47.700134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:36.408 [2024-07-14 21:26:47.700142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.408 [2024-07-14 21:26:47.700168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:36.408 [2024-07-14 21:26:47.700177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:36.408 [2024-07-14 21:26:47.700187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.408 [2024-07-14 21:26:47.700196] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:36.408 [2024-07-14 21:26:47.700207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:36.408 [2024-07-14 21:26:47.700217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:36.408 [2024-07-14 21:26:47.700228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.408 [2024-07-14 21:26:47.700253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:36.408 [2024-07-14 21:26:47.700262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:36.408 [2024-07-14 21:26:47.700272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:36.408 
[2024-07-14 21:26:47.700283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:36.408 [2024-07-14 21:26:47.700292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:36.408 [2024-07-14 21:26:47.700302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:36.408 [2024-07-14 21:26:47.700313] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:36.408 [2024-07-14 21:26:47.700326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.408 [2024-07-14 21:26:47.700337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:36.408 [2024-07-14 21:26:47.700347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:36.408 [2024-07-14 21:26:47.700358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:36.408 [2024-07-14 21:26:47.700368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:36.408 [2024-07-14 21:26:47.700378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:36.408 [2024-07-14 21:26:47.700388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:36.408 [2024-07-14 21:26:47.700398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:36.408 [2024-07-14 21:26:47.700408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:36.408 [2024-07-14 21:26:47.700418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:36.408 [2024-07-14 21:26:47.700444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:36.408 [2024-07-14 21:26:47.700454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:36.408 [2024-07-14 21:26:47.700465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:36.408 [2024-07-14 21:26:47.700475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:36.408 [2024-07-14 21:26:47.700486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:36.408 [2024-07-14 21:26:47.700524] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:36.408 [2024-07-14 21:26:47.700537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.408 [2024-07-14 21:26:47.700549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:36.408 [2024-07-14 21:26:47.700561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:36.408 [2024-07-14 21:26:47.700572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:36.408 [2024-07-14 21:26:47.700583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:36.408 [2024-07-14 21:26:47.700595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.700612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:36.408 [2024-07-14 21:26:47.700624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:27:36.408 [2024-07-14 21:26:47.700635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.408 [2024-07-14 21:26:47.741617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.741681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:36.408 [2024-07-14 21:26:47.741700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.896 ms 00:27:36.408 [2024-07-14 21:26:47.741711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.408 [2024-07-14 21:26:47.741890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.741907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:36.408 [2024-07-14 21:26:47.741920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:27:36.408 [2024-07-14 21:26:47.741942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.408 [2024-07-14 21:26:47.778903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.778967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:36.408 [2024-07-14 21:26:47.778987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.851 ms 00:27:36.408 [2024-07-14 21:26:47.778999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.408 [2024-07-14 21:26:47.779069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.779085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:36.408 [2024-07-14 21:26:47.779098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:36.408 [2024-07-14 21:26:47.779109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.408 [2024-07-14 21:26:47.779533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.779559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:36.408 [2024-07-14 21:26:47.779574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:27:36.408 [2024-07-14 21:26:47.779585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.408 [2024-07-14 21:26:47.779805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.779839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:36.408 [2024-07-14 21:26:47.779852] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:27:36.408 [2024-07-14 21:26:47.779863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.408 [2024-07-14 21:26:47.795785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.795851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:36.408 [2024-07-14 21:26:47.795886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.894 ms 00:27:36.408 [2024-07-14 21:26:47.795898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.408 [2024-07-14 21:26:47.811432] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:36.408 [2024-07-14 21:26:47.811473] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:36.408 [2024-07-14 21:26:47.811490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.408 [2024-07-14 21:26:47.811501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:36.409 [2024-07-14 21:26:47.811513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.458 ms 00:27:36.409 [2024-07-14 21:26:47.811523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.838940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.838979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:36.409 [2024-07-14 21:26:47.839010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.375 ms 00:27:36.409 [2024-07-14 21:26:47.839027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.853160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.853213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:36.409 [2024-07-14 21:26:47.853228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.088 ms 00:27:36.409 [2024-07-14 21:26:47.853238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.867140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.867177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:36.409 [2024-07-14 21:26:47.867193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.863 ms 00:27:36.409 [2024-07-14 21:26:47.867203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.868023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.868059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:36.409 [2024-07-14 21:26:47.868076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:27:36.409 [2024-07-14 21:26:47.868087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.933943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.934010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:36.409 [2024-07-14 21:26:47.934028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.830 ms 00:27:36.409 [2024-07-14 21:26:47.934038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.945412] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:36.409 [2024-07-14 21:26:47.947908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.947951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:36.409 [2024-07-14 21:26:47.947966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.784 ms 00:27:36.409 [2024-07-14 21:26:47.947977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.948070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.948089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:36.409 [2024-07-14 21:26:47.948116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:36.409 [2024-07-14 21:26:47.948126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.948804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.948870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:36.409 [2024-07-14 21:26:47.948904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:27:36.409 [2024-07-14 21:26:47.948931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.948983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.948999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:36.409 [2024-07-14 21:26:47.949011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:36.409 [2024-07-14 21:26:47.949022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.409 [2024-07-14 21:26:47.949084] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:36.409 [2024-07-14 21:26:47.949115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.409 [2024-07-14 21:26:47.949125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:36.409 [2024-07-14 21:26:47.949140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:36.409 [2024-07-14 21:26:47.949150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.728 [2024-07-14 21:26:47.978190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.728 [2024-07-14 21:26:47.978228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:36.728 [2024-07-14 21:26:47.978244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.015 ms 00:27:36.728 [2024-07-14 21:26:47.978254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.728 [2024-07-14 21:26:47.978323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.728 [2024-07-14 21:26:47.978348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:36.728 [2024-07-14 21:26:47.978359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:36.728 [2024-07-14 21:26:47.978368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:36.728 [2024-07-14 21:26:47.979601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.459 ms, result 0 00:28:17.067  Copying: 26/1024 [MB] (26 MBps) Copying: 50/1024 [MB] (24 MBps) Copying: 75/1024 [MB] (24 MBps) Copying: 100/1024 [MB] (24 MBps) Copying: 124/1024 [MB] (24 MBps) Copying: 149/1024 [MB] (24 MBps) Copying: 173/1024 [MB] (24 MBps) Copying: 197/1024 [MB] (23 MBps) Copying: 221/1024 [MB] (23 MBps) Copying: 245/1024 [MB] (24 MBps) Copying: 271/1024 [MB] (25 MBps) Copying: 297/1024 [MB] (25 MBps) Copying: 323/1024 [MB] (26 MBps) Copying: 349/1024 [MB] (26 MBps) Copying: 374/1024 [MB] (24 MBps) Copying: 398/1024 [MB] (24 MBps) Copying: 422/1024 [MB] (23 MBps) Copying: 449/1024 [MB] (27 MBps) Copying: 475/1024 [MB] (26 MBps) Copying: 503/1024 [MB] (27 MBps) Copying: 530/1024 [MB] (27 MBps) Copying: 558/1024 [MB] (27 MBps) Copying: 585/1024 [MB] (27 MBps) Copying: 611/1024 [MB] (25 MBps) Copying: 638/1024 [MB] (26 MBps) Copying: 664/1024 [MB] (26 MBps) Copying: 691/1024 [MB] (27 MBps) Copying: 716/1024 [MB] (24 MBps) Copying: 740/1024 [MB] (23 MBps) Copying: 764/1024 [MB] (24 MBps) Copying: 790/1024 [MB] (26 MBps) Copying: 816/1024 [MB] (26 MBps) Copying: 843/1024 [MB] (26 MBps) Copying: 869/1024 [MB] (25 MBps) Copying: 894/1024 [MB] (25 MBps) Copying: 918/1024 [MB] (23 MBps) Copying: 941/1024 [MB] (23 MBps) Copying: 965/1024 [MB] (24 MBps) Copying: 990/1024 [MB] (24 MBps) Copying: 1015/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-14 21:27:28.529235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.067 [2024-07-14 21:27:28.529312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:17.067 [2024-07-14 21:27:28.529350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:17.067 [2024-07-14 21:27:28.529362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.067 [2024-07-14 21:27:28.529392] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:17.067 [2024-07-14 21:27:28.533560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.067 [2024-07-14 21:27:28.533613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:17.067 [2024-07-14 21:27:28.533644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.145 ms 00:28:17.068 [2024-07-14 21:27:28.533655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.068 [2024-07-14 21:27:28.533958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.068 [2024-07-14 21:27:28.533987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:17.068 [2024-07-14 21:27:28.534001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:28:17.068 [2024-07-14 21:27:28.534013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.068 [2024-07-14 21:27:28.537578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.068 [2024-07-14 21:27:28.537610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:17.068 [2024-07-14 21:27:28.537639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:28:17.068 [2024-07-14 21:27:28.537649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.068 [2024-07-14 21:27:28.543889] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.068 [2024-07-14 21:27:28.543924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:17.068 [2024-07-14 21:27:28.543960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.218 ms 00:28:17.068 [2024-07-14 21:27:28.543971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.068 [2024-07-14 21:27:28.576567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.068 [2024-07-14 21:27:28.576619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:17.068 [2024-07-14 21:27:28.576639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.520 ms 00:28:17.068 [2024-07-14 21:27:28.576651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.068 [2024-07-14 21:27:28.594807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.068 [2024-07-14 21:27:28.594883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:17.068 [2024-07-14 21:27:28.594904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.101 ms 00:28:17.068 [2024-07-14 21:27:28.594916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.068 [2024-07-14 21:27:28.598130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.068 [2024-07-14 21:27:28.598177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:17.068 [2024-07-14 21:27:28.598195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.158 ms 00:28:17.068 [2024-07-14 21:27:28.598244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.327 [2024-07-14 21:27:28.629567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.327 [2024-07-14 21:27:28.629609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:17.327 [2024-07-14 21:27:28.629642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.301 ms 00:28:17.327 [2024-07-14 21:27:28.629652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.327 [2024-07-14 21:27:28.659370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.327 [2024-07-14 21:27:28.659425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:17.327 [2024-07-14 21:27:28.659457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.659 ms 00:28:17.327 [2024-07-14 21:27:28.659468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.327 [2024-07-14 21:27:28.689335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.327 [2024-07-14 21:27:28.689377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:17.327 [2024-07-14 21:27:28.689407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.825 ms 00:28:17.327 [2024-07-14 21:27:28.689418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.328 [2024-07-14 21:27:28.720721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.328 [2024-07-14 21:27:28.720791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:17.328 [2024-07-14 21:27:28.720828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.215 ms 00:28:17.328 [2024-07-14 21:27:28.720840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:17.328 [2024-07-14 21:27:28.720887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:17.328 [2024-07-14 21:27:28.720912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:17.328 [2024-07-14 21:27:28.720926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:28:17.328 [2024-07-14 21:27:28.720940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.720953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.720965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.720977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.720989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.721981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722017] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:17.328 [2024-07-14 21:27:28.722224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722405] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:17.329 [2024-07-14 21:27:28.722453] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:17.329 [2024-07-14 21:27:28.722465] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b64d2f-1cec-4ba9-86bd-e5a50c3d64ab 00:28:17.329 [2024-07-14 21:27:28.722477] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:28:17.329 [2024-07-14 21:27:28.722489] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:17.329 [2024-07-14 21:27:28.722507] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:17.329 [2024-07-14 21:27:28.722525] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:17.329 [2024-07-14 21:27:28.722545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:17.329 [2024-07-14 21:27:28.722566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:17.329 [2024-07-14 21:27:28.722579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:17.329 [2024-07-14 21:27:28.722589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:17.329 [2024-07-14 21:27:28.722599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:17.329 [2024-07-14 21:27:28.722611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.329 [2024-07-14 21:27:28.722622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:17.329 [2024-07-14 21:27:28.722635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:28:17.329 [2024-07-14 21:27:28.722646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.329 [2024-07-14 21:27:28.739074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.329 [2024-07-14 21:27:28.739114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:17.329 [2024-07-14 21:27:28.739174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.361 ms 00:28:17.329 [2024-07-14 21:27:28.739185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.329 [2024-07-14 21:27:28.739603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.329 [2024-07-14 21:27:28.739623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:17.329 [2024-07-14 21:27:28.739636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:28:17.329 [2024-07-14 21:27:28.739646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.329 [2024-07-14 21:27:28.775771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.329 [2024-07-14 21:27:28.775847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:17.329 [2024-07-14 21:27:28.775892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.329 [2024-07-14 21:27:28.775903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.329 [2024-07-14 21:27:28.775994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.329 [2024-07-14 21:27:28.776009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:17.329 
[2024-07-14 21:27:28.776021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.329 [2024-07-14 21:27:28.776032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.329 [2024-07-14 21:27:28.776143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.329 [2024-07-14 21:27:28.776169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:17.329 [2024-07-14 21:27:28.776193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.329 [2024-07-14 21:27:28.776213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.329 [2024-07-14 21:27:28.776253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.329 [2024-07-14 21:27:28.776276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:17.329 [2024-07-14 21:27:28.776289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.329 [2024-07-14 21:27:28.776300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.872619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.587 [2024-07-14 21:27:28.872686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:17.587 [2024-07-14 21:27:28.872706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.587 [2024-07-14 21:27:28.872718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.954074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.587 [2024-07-14 21:27:28.954138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:17.587 [2024-07-14 21:27:28.954188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.587 [2024-07-14 21:27:28.954199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.954276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.587 [2024-07-14 21:27:28.954300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:17.587 [2024-07-14 21:27:28.954311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.587 [2024-07-14 21:27:28.954321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.954363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.587 [2024-07-14 21:27:28.954378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:17.587 [2024-07-14 21:27:28.954388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.587 [2024-07-14 21:27:28.954398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.954506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.587 [2024-07-14 21:27:28.954529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:17.587 [2024-07-14 21:27:28.954540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.587 [2024-07-14 21:27:28.954550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.954615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.587 [2024-07-14 21:27:28.954632] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:17.587 [2024-07-14 21:27:28.954644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.587 [2024-07-14 21:27:28.954654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.954694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.587 [2024-07-14 21:27:28.954708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:17.587 [2024-07-14 21:27:28.954725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.587 [2024-07-14 21:27:28.954736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.954801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.587 [2024-07-14 21:27:28.954877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:17.587 [2024-07-14 21:27:28.954892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.587 [2024-07-14 21:27:28.954903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.587 [2024-07-14 21:27:28.955050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 425.802 ms, result 0 00:28:18.522 00:28:18.522 00:28:18.522 21:27:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:21.057 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:21.057 Process with pid 82982 is not found 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82982 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 82982 ']' 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 82982 00:28:21.057 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (82982) - No such process 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 82982 is not found' 00:28:21.057 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:21.316 Remove shared memory files 00:28:21.316 21:27:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:21.316 21:27:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:21.316 21:27:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:21.316 21:27:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:21.316 21:27:32 
ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:21.316 21:27:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:21.316 21:27:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:21.316 ************************************ 00:28:21.316 END TEST ftl_dirty_shutdown 00:28:21.316 ************************************ 00:28:21.316 00:28:21.316 real 3m52.317s 00:28:21.316 user 4m30.899s 00:28:21.316 sys 0m35.565s 00:28:21.316 21:27:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:21.316 21:27:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:21.316 21:27:32 ftl -- common/autotest_common.sh@1142 -- # return 0 00:28:21.316 21:27:32 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:21.316 21:27:32 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:21.316 21:27:32 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.316 21:27:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:21.316 ************************************ 00:28:21.316 START TEST ftl_upgrade_shutdown 00:28:21.316 ************************************ 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:21.316 * Looking for test storage... 00:28:21.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:21.316 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:21.317 
21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85399 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85399 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85399 ']' 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.317 21:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:21.575 [2024-07-14 21:27:32.951631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
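For readers reproducing this stage by hand: the xtrace above is upgrade_shutdown.sh exporting its test parameters and tcp_target_setup launching spdk_tgt pinned to core 0. A minimal sketch of the equivalent manual bring-up, using only the paths, PCI addresses and sizes shown in this trace (the rpc_get_methods polling loop is a rough stand-in for the suite's waitforlisten helper, not the real thing):

    export FTL_BDEV=ftl
    export FTL_BASE=0000:00:11.0        # base (data) NVMe device
    export FTL_BASE_SIZE=20480          # MiB handed to the FTL base bdev
    export FTL_CACHE=0000:00:10.0       # NV cache NVMe device
    export FTL_CACHE_SIZE=5120          # MiB handed to the write-buffer cache
    export FTL_L2P_DRAM_LIMIT=2         # MiB of DRAM allowed for the L2P table
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" '--cpumask=[0]' &
    # crude wait-until-listening probe against the default RPC socket
    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done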
00:28:21.575 [2024-07-14 21:27:32.952022] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85399 ] 00:28:21.832 [2024-07-14 21:27:33.126000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.832 [2024-07-14 21:27:33.353194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:22.763 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:23.022 { 00:28:23.022 "name": "basen1", 00:28:23.022 "aliases": [ 00:28:23.022 "ca67597d-62ea-4b16-8b64-c0b131f77c42" 00:28:23.022 ], 00:28:23.022 "product_name": "NVMe disk", 00:28:23.022 "block_size": 4096, 00:28:23.022 "num_blocks": 1310720, 00:28:23.022 "uuid": "ca67597d-62ea-4b16-8b64-c0b131f77c42", 00:28:23.022 "assigned_rate_limits": { 00:28:23.022 "rw_ios_per_sec": 0, 00:28:23.022 "rw_mbytes_per_sec": 0, 00:28:23.022 "r_mbytes_per_sec": 0, 00:28:23.022 "w_mbytes_per_sec": 0 00:28:23.022 }, 00:28:23.022 "claimed": true, 00:28:23.022 "claim_type": "read_many_write_one", 00:28:23.022 "zoned": false, 00:28:23.022 "supported_io_types": { 00:28:23.022 "read": true, 00:28:23.022 "write": true, 00:28:23.022 "unmap": true, 00:28:23.022 "flush": true, 00:28:23.022 "reset": true, 00:28:23.022 "nvme_admin": true, 00:28:23.022 "nvme_io": true, 00:28:23.022 "nvme_io_md": false, 00:28:23.022 "write_zeroes": true, 00:28:23.022 "zcopy": false, 00:28:23.022 "get_zone_info": false, 00:28:23.022 "zone_management": false, 00:28:23.022 "zone_append": false, 00:28:23.022 "compare": true, 00:28:23.022 "compare_and_write": false, 00:28:23.022 "abort": true, 00:28:23.022 "seek_hole": false, 00:28:23.022 "seek_data": false, 00:28:23.022 "copy": true, 00:28:23.022 "nvme_iov_md": false 00:28:23.022 }, 00:28:23.022 "driver_specific": { 00:28:23.022 "nvme": [ 00:28:23.022 { 00:28:23.022 "pci_address": "0000:00:11.0", 00:28:23.022 "trid": { 00:28:23.022 "trtype": "PCIe", 00:28:23.022 "traddr": "0000:00:11.0" 00:28:23.022 }, 00:28:23.022 "ctrlr_data": { 00:28:23.022 "cntlid": 0, 00:28:23.022 "vendor_id": "0x1b36", 00:28:23.022 "model_number": "QEMU NVMe Ctrl", 00:28:23.022 "serial_number": "12341", 00:28:23.022 "firmware_revision": "8.0.0", 00:28:23.022 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:23.022 "oacs": { 00:28:23.022 "security": 0, 00:28:23.022 "format": 1, 00:28:23.022 "firmware": 0, 00:28:23.022 "ns_manage": 1 00:28:23.022 }, 00:28:23.022 "multi_ctrlr": false, 00:28:23.022 "ana_reporting": false 00:28:23.022 }, 00:28:23.022 "vs": { 00:28:23.022 "nvme_version": "1.4" 00:28:23.022 }, 00:28:23.022 "ns_data": { 00:28:23.022 "id": 1, 00:28:23.022 "can_share": false 00:28:23.022 } 00:28:23.022 } 00:28:23.022 ], 00:28:23.022 "mp_policy": "active_passive" 00:28:23.022 } 00:28:23.022 } 00:28:23.022 ]' 00:28:23.022 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:23.281 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:23.539 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ff2fbde4-1680-4c93-ab20-762a4c4be704 00:28:23.539 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:23.539 21:27:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff2fbde4-1680-4c93-ab20-762a4c4be704 00:28:23.798 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:24.056 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0a3dbff0-eb6a-4581-be15-750e74447b8c 00:28:24.056 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0a3dbff0-eb6a-4581-be15-750e74447b8c 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5b4d111c-53a6-46ab-8a34-67a33f10aeba 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5b4d111c-53a6-46ab-8a34-67a33f10aeba ]] 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5b4d111c-53a6-46ab-8a34-67a33f10aeba 5120 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5b4d111c-53a6-46ab-8a34-67a33f10aeba 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5b4d111c-53a6-46ab-8a34-67a33f10aeba 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5b4d111c-53a6-46ab-8a34-67a33f10aeba 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:24.314 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b4d111c-53a6-46ab-8a34-67a33f10aeba 00:28:24.572 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:24.572 { 00:28:24.572 "name": "5b4d111c-53a6-46ab-8a34-67a33f10aeba", 00:28:24.572 "aliases": [ 00:28:24.572 "lvs/basen1p0" 00:28:24.572 ], 00:28:24.572 "product_name": "Logical Volume", 00:28:24.572 "block_size": 4096, 00:28:24.572 "num_blocks": 5242880, 00:28:24.572 "uuid": "5b4d111c-53a6-46ab-8a34-67a33f10aeba", 00:28:24.572 "assigned_rate_limits": { 00:28:24.572 "rw_ios_per_sec": 0, 00:28:24.572 "rw_mbytes_per_sec": 0, 00:28:24.572 "r_mbytes_per_sec": 0, 00:28:24.572 "w_mbytes_per_sec": 0 00:28:24.572 }, 00:28:24.572 "claimed": false, 00:28:24.572 "zoned": false, 00:28:24.572 "supported_io_types": { 00:28:24.572 "read": true, 00:28:24.572 "write": true, 00:28:24.572 "unmap": true, 00:28:24.572 "flush": false, 00:28:24.572 "reset": true, 00:28:24.572 "nvme_admin": false, 00:28:24.572 "nvme_io": false, 00:28:24.572 "nvme_io_md": false, 00:28:24.572 "write_zeroes": true, 00:28:24.572 
"zcopy": false, 00:28:24.572 "get_zone_info": false, 00:28:24.573 "zone_management": false, 00:28:24.573 "zone_append": false, 00:28:24.573 "compare": false, 00:28:24.573 "compare_and_write": false, 00:28:24.573 "abort": false, 00:28:24.573 "seek_hole": true, 00:28:24.573 "seek_data": true, 00:28:24.573 "copy": false, 00:28:24.573 "nvme_iov_md": false 00:28:24.573 }, 00:28:24.573 "driver_specific": { 00:28:24.573 "lvol": { 00:28:24.573 "lvol_store_uuid": "0a3dbff0-eb6a-4581-be15-750e74447b8c", 00:28:24.573 "base_bdev": "basen1", 00:28:24.573 "thin_provision": true, 00:28:24.573 "num_allocated_clusters": 0, 00:28:24.573 "snapshot": false, 00:28:24.573 "clone": false, 00:28:24.573 "esnap_clone": false 00:28:24.573 } 00:28:24.573 } 00:28:24.573 } 00:28:24.573 ]' 00:28:24.573 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:24.573 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:24.573 21:27:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:24.573 21:27:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:28:24.573 21:27:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:28:24.573 21:27:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:28:24.573 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:24.573 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:24.573 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:24.831 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:24.831 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:24.831 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:25.088 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:25.088 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:25.088 21:27:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5b4d111c-53a6-46ab-8a34-67a33f10aeba -c cachen1p0 --l2p_dram_limit 2 00:28:25.346 [2024-07-14 21:27:36.731162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.346 [2024-07-14 21:27:36.731259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:25.346 [2024-07-14 21:27:36.731282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:25.346 [2024-07-14 21:27:36.731296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.346 [2024-07-14 21:27:36.731389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.346 [2024-07-14 21:27:36.731412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:25.346 [2024-07-14 21:27:36.731426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:28:25.346 [2024-07-14 21:27:36.731447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.346 [2024-07-14 21:27:36.731488] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:25.346 [2024-07-14 21:27:36.732501] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:25.346 [2024-07-14 21:27:36.732577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.346 [2024-07-14 21:27:36.732604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:25.346 [2024-07-14 21:27:36.732619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.096 ms 00:28:25.346 [2024-07-14 21:27:36.732633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.346 [2024-07-14 21:27:36.732754] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 6f306d99-2f3d-4bb6-87a8-3468ef3a06e7 00:28:25.346 [2024-07-14 21:27:36.733979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.346 [2024-07-14 21:27:36.734142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:25.347 [2024-07-14 21:27:36.734289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:25.347 [2024-07-14 21:27:36.734314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.347 [2024-07-14 21:27:36.739115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.347 [2024-07-14 21:27:36.739166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:25.347 [2024-07-14 21:27:36.739192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.734 ms 00:28:25.347 [2024-07-14 21:27:36.739204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.347 [2024-07-14 21:27:36.739275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.347 [2024-07-14 21:27:36.739310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:25.347 [2024-07-14 21:27:36.739325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:25.347 [2024-07-14 21:27:36.739336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.347 [2024-07-14 21:27:36.739434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.347 [2024-07-14 21:27:36.739453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:25.347 [2024-07-14 21:27:36.739467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:25.347 [2024-07-14 21:27:36.739481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.347 [2024-07-14 21:27:36.739517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:25.347 [2024-07-14 21:27:36.744534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.347 [2024-07-14 21:27:36.744603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:25.347 [2024-07-14 21:27:36.744621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.029 ms 00:28:25.347 [2024-07-14 21:27:36.744635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.347 [2024-07-14 21:27:36.744676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.347 [2024-07-14 21:27:36.744695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:25.347 [2024-07-14 21:27:36.744708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:25.347 [2024-07-14 21:27:36.744722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
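Every management step in this trace is logged by trace_step in mngt/ftl_mngt.c as an Action/Rollback header followed by name, duration and status lines. When skimming long FTL logs it can help to tabulate those durations; a small sketch, assuming the one-entry-per-line layout of the original console (the wrapped lines in this capture would need re-splitting first) and a hypothetical capture file named console.log:

    awk '
      /428:trace_step/ { sub(/.*name: /, "");     name = $0 }             # remember the step name
      /430:trace_step/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, name }
    ' console.log

Run against this job it surfaces the slow steps immediately, e.g. Scrub NV cache at 2052.329 ms versus the many sub-millisecond metadata steps.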
00:28:25.347 [2024-07-14 21:27:36.744765] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:25.347 [2024-07-14 21:27:36.744962] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:25.347 [2024-07-14 21:27:36.744986] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:25.347 [2024-07-14 21:27:36.745007] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:25.347 [2024-07-14 21:27:36.745023] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:25.347 [2024-07-14 21:27:36.745039] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:25.347 [2024-07-14 21:27:36.745053] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:25.347 [2024-07-14 21:27:36.745066] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:25.347 [2024-07-14 21:27:36.745082] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:25.347 [2024-07-14 21:27:36.745094] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:25.347 [2024-07-14 21:27:36.745107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.347 [2024-07-14 21:27:36.745120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:25.347 [2024-07-14 21:27:36.745133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:28:25.347 [2024-07-14 21:27:36.745155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.347 [2024-07-14 21:27:36.745250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.347 [2024-07-14 21:27:36.745268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:25.347 [2024-07-14 21:27:36.745280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:28:25.347 [2024-07-14 21:27:36.745294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.347 [2024-07-14 21:27:36.745405] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:25.347 [2024-07-14 21:27:36.745434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:25.347 [2024-07-14 21:27:36.745448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:25.347 [2024-07-14 21:27:36.745462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:25.347 [2024-07-14 21:27:36.745487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:25.347 [2024-07-14 21:27:36.745526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:25.347 [2024-07-14 21:27:36.745537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:25.347 [2024-07-14 21:27:36.745550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:25.347 [2024-07-14 21:27:36.745576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:28:25.347 [2024-07-14 21:27:36.745587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:25.347 [2024-07-14 21:27:36.745612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:25.347 [2024-07-14 21:27:36.745624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:25.347 [2024-07-14 21:27:36.745650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:25.347 [2024-07-14 21:27:36.745663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:25.347 [2024-07-14 21:27:36.745688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:25.347 [2024-07-14 21:27:36.745701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:25.347 [2024-07-14 21:27:36.745712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:25.347 [2024-07-14 21:27:36.745725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:25.347 [2024-07-14 21:27:36.745736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:25.347 [2024-07-14 21:27:36.745748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:25.347 [2024-07-14 21:27:36.745759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:25.347 [2024-07-14 21:27:36.745772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:25.347 [2024-07-14 21:27:36.745782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:25.347 [2024-07-14 21:27:36.745809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:25.347 [2024-07-14 21:27:36.745825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:25.347 [2024-07-14 21:27:36.745839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:25.347 [2024-07-14 21:27:36.745850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:25.347 [2024-07-14 21:27:36.745865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:25.347 [2024-07-14 21:27:36.745889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:25.347 [2024-07-14 21:27:36.745899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:25.347 [2024-07-14 21:27:36.745926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:25.347 [2024-07-14 21:27:36.745962] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:25.347 [2024-07-14 21:27:36.745973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.745985] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:28:25.347 [2024-07-14 21:27:36.745997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:25.347 [2024-07-14 21:27:36.746011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:25.347 [2024-07-14 21:27:36.746022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:25.347 [2024-07-14 21:27:36.746036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:25.347 [2024-07-14 21:27:36.746047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:25.347 [2024-07-14 21:27:36.746061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:25.347 [2024-07-14 21:27:36.746073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:25.347 [2024-07-14 21:27:36.746085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:25.347 [2024-07-14 21:27:36.746096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:25.347 [2024-07-14 21:27:36.746113] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:25.347 [2024-07-14 21:27:36.746127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.347 [2024-07-14 21:27:36.746145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:25.347 [2024-07-14 21:27:36.746157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:25.347 [2024-07-14 21:27:36.746171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:25.347 [2024-07-14 21:27:36.746183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:25.347 [2024-07-14 21:27:36.746196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:25.347 [2024-07-14 21:27:36.746208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:25.347 [2024-07-14 21:27:36.746223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:25.347 [2024-07-14 21:27:36.746235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:25.347 [2024-07-14 21:27:36.746248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:25.347 [2024-07-14 21:27:36.746261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:25.347 [2024-07-14 21:27:36.746285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:25.347 [2024-07-14 21:27:36.746297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:25.347 [2024-07-14 21:27:36.746311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:28:25.348 [2024-07-14 21:27:36.746323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:25.348 [2024-07-14 21:27:36.746336] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:25.348 [2024-07-14 21:27:36.746349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.348 [2024-07-14 21:27:36.746364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:25.348 [2024-07-14 21:27:36.746375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:25.348 [2024-07-14 21:27:36.746389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:25.348 [2024-07-14 21:27:36.746401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:25.348 [2024-07-14 21:27:36.746415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.348 [2024-07-14 21:27:36.746428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:25.348 [2024-07-14 21:27:36.746442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.074 ms 00:28:25.348 [2024-07-14 21:27:36.746453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.348 [2024-07-14 21:27:36.746513] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
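The superblock and layout dump above is emitted while bdev_ftl_create assembles a fresh FTL instance on top of the bdev chain built earlier in this test; the scrub notice that follows explains the two-second gap before the next trace entry. For reference, a condensed sketch of that RPC chain, using only calls and flags that appear verbatim in this log (the lvstore and lvol UUIDs are whatever the create calls print on a given run; the sketch assumes rpc.py prints them bare, as it did here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0     # exposes basen1
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)                      # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")            # thin 20 GiB lvol
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0    # exposes cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                             # carves cachen1p0, 5 GiB
    $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2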
00:28:25.348 [2024-07-14 21:27:36.746531] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:27.875 [2024-07-14 21:27:38.798833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.798920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:27.875 [2024-07-14 21:27:38.798948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2052.329 ms 00:28:27.875 [2024-07-14 21:27:38.798962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.831566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.831636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:27.875 [2024-07-14 21:27:38.831659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.337 ms 00:28:27.875 [2024-07-14 21:27:38.831672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.831807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.831862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:27.875 [2024-07-14 21:27:38.831879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:27.875 [2024-07-14 21:27:38.831894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.871035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.871096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:27.875 [2024-07-14 21:27:38.871118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.081 ms 00:28:27.875 [2024-07-14 21:27:38.871131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.871258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.871274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:27.875 [2024-07-14 21:27:38.871288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:27.875 [2024-07-14 21:27:38.871298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.871636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.871653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:27.875 [2024-07-14 21:27:38.871666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:28:27.875 [2024-07-14 21:27:38.871676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.871731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.871747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:27.875 [2024-07-14 21:27:38.871762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:27.875 [2024-07-14 21:27:38.871772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.890078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.890149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:27.875 [2024-07-14 21:27:38.890213] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.263 ms 00:28:27.875 [2024-07-14 21:27:38.890224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.904223] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:27.875 [2024-07-14 21:27:38.905228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.905296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:27.875 [2024-07-14 21:27:38.905313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.896 ms 00:28:27.875 [2024-07-14 21:27:38.905326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.938964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.939028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:27.875 [2024-07-14 21:27:38.939049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.603 ms 00:28:27.875 [2024-07-14 21:27:38.939064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.939193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.939220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:27.875 [2024-07-14 21:27:38.939235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:28:27.875 [2024-07-14 21:27:38.939252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:38.969575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:38.969637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:27.875 [2024-07-14 21:27:38.969655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.256 ms 00:28:27.875 [2024-07-14 21:27:38.969669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.001246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:39.001301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:27.875 [2024-07-14 21:27:39.001321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.530 ms 00:28:27.875 [2024-07-14 21:27:39.001336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.002085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:39.002125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:27.875 [2024-07-14 21:27:39.002142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.698 ms 00:28:27.875 [2024-07-14 21:27:39.002160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.092186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:39.092287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:27.875 [2024-07-14 21:27:39.092309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 89.958 ms 00:28:27.875 [2024-07-14 21:27:39.092327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.126429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:27.875 [2024-07-14 21:27:39.126497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:27.875 [2024-07-14 21:27:39.126518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.031 ms 00:28:27.875 [2024-07-14 21:27:39.126533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.158433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:39.158508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:27.875 [2024-07-14 21:27:39.158541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.846 ms 00:28:27.875 [2024-07-14 21:27:39.158556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.190708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:39.190776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:27.875 [2024-07-14 21:27:39.190796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.096 ms 00:28:27.875 [2024-07-14 21:27:39.190832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.190898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:39.190920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:27.875 [2024-07-14 21:27:39.190934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:27.875 [2024-07-14 21:27:39.190950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.191065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.875 [2024-07-14 21:27:39.191089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:27.875 [2024-07-14 21:27:39.191106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:27.875 [2024-07-14 21:27:39.191119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.875 [2024-07-14 21:27:39.192180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2460.523 ms, result 0 00:28:27.875 { 00:28:27.875 "name": "ftl", 00:28:27.875 "uuid": "6f306d99-2f3d-4bb6-87a8-3468ef3a06e7" 00:28:27.875 } 00:28:27.875 21:27:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:28.133 [2024-07-14 21:27:39.479463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.133 21:27:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:28.391 21:27:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:28.649 [2024-07-14 21:27:39.980066] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:28.649 21:27:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:28.908 [2024-07-14 21:27:40.253625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:28.908 21:27:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:29.166 Fill FTL, iteration 1 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85510 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85510 /var/tmp/spdk.tgt.sock 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85510 ']' 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:29.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:29.166 21:27:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:29.424 [2024-07-14 21:27:40.718164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
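
With the FTL device up, common.sh@121-124 (traced above) publishes it over NVMe/TCP so that a separate initiator process can drive I/O against it. The same four RPCs, condensed into a sketch (the NQN and the loopback listener are the test's fixed values):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2018-09.io.spdk:cnode0
$rpc nvmf_create_transport --trtype TCP          # @121: bring up the TCP transport
$rpc nvmf_create_subsystem "$nqn" -a -m 1        # @122: allow any host, one namespace max
$rpc nvmf_subsystem_add_ns "$nqn" ftl            # @123: expose the ftl bdev as a namespace
$rpc nvmf_subsystem_add_listener "$nqn" -t TCP -f ipv4 -s 4420 -a 127.0.0.1   # @124
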
00:28:29.424 [2024-07-14 21:27:40.718343] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85510 ] 00:28:29.424 [2024-07-14 21:27:40.890533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.682 [2024-07-14 21:27:41.097315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.248 21:27:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:30.248 21:27:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:30.248 21:27:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:30.815 ftln1 00:28:30.815 21:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:30.815 21:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:30.815 21:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:30.815 21:27:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85510 00:28:30.815 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85510 ']' 00:28:30.815 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85510 00:28:31.073 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:28:31.073 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.073 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85510 00:28:31.073 killing process with pid 85510 00:28:31.073 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:31.073 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:31.073 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85510' 00:28:31.073 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85510 00:28:31.073 21:27:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85510 00:28:32.974 21:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:32.974 21:27:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:32.974 [2024-07-14 21:27:44.467291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
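
tcp_initiator_setup (common.sh@151-173, traced above) starts a second spdk_tgt bound to /var/tmp/spdk.tgt.sock, attaches it to the subsystem exported a moment ago, and wraps the resulting bdev table into the JSON config that every spdk_dd invocation below consumes via --json. A sketch of the visible steps; the redirection into ini.json is an inference from the [[ -f ini.json ]] check at @153 and the later --json=.../ini.json arguments:

tgt_rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
# @167: attach over TCP; the RPC prints the namespace bdev it creates, here 'ftln1'
$tgt_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2018-09.io.spdk:cnode0
# @171-173: frame the bdev subsystem dump as a standalone config file
{
    echo '{"subsystems": ['
    $tgt_rpc save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
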
00:28:32.974 [2024-07-14 21:27:44.467450] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85559 ] 00:28:33.232 [2024-07-14 21:27:44.631808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.502 [2024-07-14 21:27:44.810171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.021  Copying: 204/1024 [MB] (204 MBps) Copying: 410/1024 [MB] (206 MBps) Copying: 618/1024 [MB] (208 MBps) Copying: 826/1024 [MB] (208 MBps) Copying: 1024/1024 [MB] (average 206 MBps) 00:28:40.021 00:28:40.021 Calculate MD5 checksum, iteration 1 00:28:40.021 21:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:40.021 21:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:40.021 21:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:40.021 21:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:40.021 21:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:40.021 21:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:40.021 21:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:40.021 21:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:40.021 [2024-07-14 21:27:51.423554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
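
Each "Fill FTL, iteration N" / "Calculate MD5 checksum, iteration N" pair is one pass of the loop at upgrade_shutdown.sh@38-48: push 1 GiB of /dev/urandom through the remote ftln1 bdev, read the same 1 GiB window back into a scratch file, and record its digest so the data can be re-verified after the shutdown/upgrade cycle. Reconstructed from the trace (tcp_dd is the test's wrapper around the spdk_dd command shown above), the loop's shape is roughly:

file=/home/vagrant/spdk_repo/spdk/test/ftl/file
bs=1048576 count=1024 qd=2 iterations=2
seek=0 skip=0 sums=()
for (( i = 0; i < iterations; i++ )); do
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    (( seek += count ))                          # @41: the next pass fills the next 1 GiB window
    tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd --skip=$skip
    (( skip += count ))                          # @45
    sums[i]=$(md5sum "$file" | cut -f1 -d' ')    # @47-48: keep only the digest field
done

Iteration 2 below follows exactly this pattern with seek and skip advanced to 1024.
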
00:28:40.021 [2024-07-14 21:27:51.423742] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85633 ] 00:28:40.279 [2024-07-14 21:27:51.597386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.279 [2024-07-14 21:27:51.780777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.967  Copying: 476/1024 [MB] (476 MBps) Copying: 947/1024 [MB] (471 MBps) Copying: 1024/1024 [MB] (average 472 MBps) 00:28:43.967 00:28:43.967 21:27:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:43.967 21:27:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:46.503 Fill FTL, iteration 2 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0bca3a435c84acf71cc8f53676caa40c 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:46.503 21:27:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:46.503 [2024-07-14 21:27:57.738966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:46.503 [2024-07-14 21:27:57.739099] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85697 ] 00:28:46.503 [2024-07-14 21:27:57.897249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.761 [2024-07-14 21:27:58.114790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.700  Copying: 195/1024 [MB] (195 MBps) Copying: 389/1024 [MB] (194 MBps) Copying: 584/1024 [MB] (195 MBps) Copying: 781/1024 [MB] (197 MBps) Copying: 972/1024 [MB] (191 MBps) Copying: 1024/1024 [MB] (average 193 MBps) 00:28:53.700 00:28:53.700 Calculate MD5 checksum, iteration 2 00:28:53.700 21:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:53.700 21:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:53.700 21:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:53.700 21:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:53.700 21:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:53.700 21:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:53.700 21:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:53.700 21:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:53.700 [2024-07-14 21:28:05.004701] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:53.700 [2024-07-14 21:28:05.004888] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85771 ] 00:28:53.700 [2024-07-14 21:28:05.175732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.958 [2024-07-14 21:28:05.339270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.064  Copying: 477/1024 [MB] (477 MBps) Copying: 950/1024 [MB] (473 MBps) Copying: 1024/1024 [MB] (average 474 MBps) 00:28:58.064 00:28:58.064 21:28:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:58.064 21:28:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:59.967 21:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:59.967 21:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ea0f07d96ffa189c7589148cc2cbb41d 00:28:59.967 21:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:59.967 21:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:59.967 21:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:00.224 [2024-07-14 21:28:11.701205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.224 [2024-07-14 21:28:11.701259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:00.224 [2024-07-14 21:28:11.701294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:00.224 [2024-07-14 21:28:11.701304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.224 [2024-07-14 21:28:11.701337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.224 [2024-07-14 21:28:11.701361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:00.224 [2024-07-14 21:28:11.701372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:00.224 [2024-07-14 21:28:11.701390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.224 [2024-07-14 21:28:11.701417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.224 [2024-07-14 21:28:11.701430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:00.224 [2024-07-14 21:28:11.701453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:00.224 [2024-07-14 21:28:11.701463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.224 [2024-07-14 21:28:11.701532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.336 ms, result 0 00:29:00.224 true 00:29:00.224 21:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:00.482 { 00:29:00.482 "name": "ftl", 00:29:00.482 "properties": [ 00:29:00.482 { 00:29:00.482 "name": "superblock_version", 00:29:00.482 "value": 5, 00:29:00.482 "read-only": true 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "name": "base_device", 00:29:00.482 "bands": [ 00:29:00.482 { 00:29:00.482 "id": 0, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 
00:29:00.482 { 00:29:00.482 "id": 1, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 2, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 3, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 4, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 5, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 6, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 7, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 8, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 9, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 10, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 11, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 12, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 13, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 14, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 15, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 16, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 17, 00:29:00.482 "state": "FREE", 00:29:00.482 "validity": 0.0 00:29:00.482 } 00:29:00.482 ], 00:29:00.482 "read-only": true 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "name": "cache_device", 00:29:00.482 "type": "bdev", 00:29:00.482 "chunks": [ 00:29:00.482 { 00:29:00.482 "id": 0, 00:29:00.482 "state": "INACTIVE", 00:29:00.482 "utilization": 0.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 1, 00:29:00.482 "state": "CLOSED", 00:29:00.482 "utilization": 1.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 2, 00:29:00.482 "state": "CLOSED", 00:29:00.482 "utilization": 1.0 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 3, 00:29:00.482 "state": "OPEN", 00:29:00.482 "utilization": 0.001953125 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "id": 4, 00:29:00.482 "state": "OPEN", 00:29:00.482 "utilization": 0.0 00:29:00.482 } 00:29:00.482 ], 00:29:00.482 "read-only": true 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "name": "verbose_mode", 00:29:00.482 "value": true, 00:29:00.482 "unit": "", 00:29:00.482 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:00.482 }, 00:29:00.482 { 00:29:00.482 "name": "prep_upgrade_on_shutdown", 00:29:00.482 "value": false, 00:29:00.482 "unit": "", 00:29:00.482 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:00.482 } 00:29:00.482 ] 00:29:00.482 } 00:29:00.482 21:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:00.740 [2024-07-14 21:28:12.221723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.740 [2024-07-14 
21:28:12.221777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:00.740 [2024-07-14 21:28:12.221842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:00.740 [2024-07-14 21:28:12.221874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.740 [2024-07-14 21:28:12.221909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.740 [2024-07-14 21:28:12.221925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:00.740 [2024-07-14 21:28:12.221936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:00.740 [2024-07-14 21:28:12.221946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.740 [2024-07-14 21:28:12.221972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.740 [2024-07-14 21:28:12.221985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:00.740 [2024-07-14 21:28:12.221996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:00.740 [2024-07-14 21:28:12.222005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.740 [2024-07-14 21:28:12.222075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.338 ms, result 0 00:29:00.740 true 00:29:00.740 21:28:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:00.740 21:28:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:00.741 21:28:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:00.999 21:28:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:00.999 21:28:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:00.999 21:28:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:01.258 [2024-07-14 21:28:12.746434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.258 [2024-07-14 21:28:12.746493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:01.258 [2024-07-14 21:28:12.746528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:01.258 [2024-07-14 21:28:12.746539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.258 [2024-07-14 21:28:12.746572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.258 [2024-07-14 21:28:12.746587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:01.258 [2024-07-14 21:28:12.746598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:01.258 [2024-07-14 21:28:12.746607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.258 [2024-07-14 21:28:12.746632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.258 [2024-07-14 21:28:12.746645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:01.258 [2024-07-14 21:28:12.746656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:01.258 [2024-07-14 21:28:12.746665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
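
upgrade_shutdown.sh@63 (above) counts the NV-cache chunks that hold any data by filtering the bdev_ftl_get_properties output through jq. Applied to the dump printed earlier (chunks 1 and 2 CLOSED at utilization 1.0, chunk 3 OPEN at 0.001953125, chunks 0 and 4 at 0.0), the filter yields 3, which is the used=3 seen at @63; @64 then compares that count against zero:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
used=$($rpc bdev_ftl_get_properties -b ftl |
    jq '[.properties[]
         | select(.name == "cache_device")
         | .chunks[]
         | select(.utilization != 0.0)
        ] | length')
echo "$used"   # -> 3: chunks 1, 2 and 3 carry data; 0 and 4 sit at utilization 0.0
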
00:29:01.258 [2024-07-14 21:28:12.746734] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.288 ms, result 0 00:29:01.258 true 00:29:01.258 21:28:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:01.516 { 00:29:01.516 "name": "ftl", 00:29:01.516 "properties": [ 00:29:01.516 { 00:29:01.516 "name": "superblock_version", 00:29:01.516 "value": 5, 00:29:01.516 "read-only": true 00:29:01.516 }, 00:29:01.516 { 00:29:01.516 "name": "base_device", 00:29:01.516 "bands": [ 00:29:01.516 { 00:29:01.516 "id": 0, 00:29:01.516 "state": "FREE", 00:29:01.516 "validity": 0.0 00:29:01.516 }, 00:29:01.516 { 00:29:01.516 "id": 1, 00:29:01.516 "state": "FREE", 00:29:01.516 "validity": 0.0 00:29:01.516 }, 00:29:01.516 { 00:29:01.516 "id": 2, 00:29:01.516 "state": "FREE", 00:29:01.516 "validity": 0.0 00:29:01.516 }, 00:29:01.516 { 00:29:01.516 "id": 3, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 4, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 5, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 6, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 7, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 8, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 9, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 10, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 11, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 12, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 13, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 14, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 15, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 16, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 17, 00:29:01.517 "state": "FREE", 00:29:01.517 "validity": 0.0 00:29:01.517 } 00:29:01.517 ], 00:29:01.517 "read-only": true 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "name": "cache_device", 00:29:01.517 "type": "bdev", 00:29:01.517 "chunks": [ 00:29:01.517 { 00:29:01.517 "id": 0, 00:29:01.517 "state": "INACTIVE", 00:29:01.517 "utilization": 0.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 1, 00:29:01.517 "state": "CLOSED", 00:29:01.517 "utilization": 1.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 2, 00:29:01.517 "state": "CLOSED", 00:29:01.517 "utilization": 1.0 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 3, 00:29:01.517 "state": "OPEN", 00:29:01.517 "utilization": 0.001953125 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "id": 4, 00:29:01.517 "state": "OPEN", 00:29:01.517 "utilization": 0.0 00:29:01.517 } 00:29:01.517 ], 00:29:01.517 "read-only": true 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "name": "verbose_mode", 00:29:01.517 "value": 
true, 00:29:01.517 "unit": "", 00:29:01.517 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:01.517 }, 00:29:01.517 { 00:29:01.517 "name": "prep_upgrade_on_shutdown", 00:29:01.517 "value": true, 00:29:01.517 "unit": "", 00:29:01.517 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:01.517 } 00:29:01.517 ] 00:29:01.517 } 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85399 ]] 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85399 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85399 ']' 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85399 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85399 00:29:01.517 killing process with pid 85399 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85399' 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85399 00:29:01.517 21:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85399 00:29:02.454 [2024-07-14 21:28:13.873105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:02.454 [2024-07-14 21:28:13.889274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.454 [2024-07-14 21:28:13.889315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:02.454 [2024-07-14 21:28:13.889355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:02.454 [2024-07-14 21:28:13.889366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.454 [2024-07-14 21:28:13.889408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:02.454 [2024-07-14 21:28:13.892369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.454 [2024-07-14 21:28:13.892399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:02.454 [2024-07-14 21:28:13.892429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.940 ms 00:29:02.454 [2024-07-14 21:28:13.892439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.371483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.371583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:12.481 [2024-07-14 21:28:22.371621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8479.066 ms 00:29:12.481 [2024-07-14 21:28:22.371633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.372937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
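
killprocess (common/autotest_common.sh, traced here for pid 85399 and earlier for the initiator pid 85510) is the stock teardown helper: confirm the pid is alive and is one of our reactors, then signal it and reap it. Reduced to the steps visible in the trace (the real helper carries more guards), it is roughly:

killprocess() {
    local pid=$1
    kill -0 "$pid"                                  # @952: still alive?
    local name=$(ps --no-headers -o comm= "$pid")   # @953-954: resolve the command name
    [[ $name != sudo ]]                             # @958: refuse to signal sudo itself
    echo "killing process with pid $pid"            # @966
    kill "$pid"                                     # @967: triggers the graceful shutdown
    wait "$pid"                                     # @972: reap it and surface its exit code
}

The FTL shutdown trace that continues below is that graceful path at work.
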
00:29:12.481 [2024-07-14 21:28:22.372971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:12.481 [2024-07-14 21:28:22.372993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.280 ms 00:29:12.481 [2024-07-14 21:28:22.373005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.374377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.374407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:12.481 [2024-07-14 21:28:22.374437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.332 ms 00:29:12.481 [2024-07-14 21:28:22.374447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.387094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.387144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:12.481 [2024-07-14 21:28:22.387177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.573 ms 00:29:12.481 [2024-07-14 21:28:22.387188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.394934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.394993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:12.481 [2024-07-14 21:28:22.395025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.693 ms 00:29:12.481 [2024-07-14 21:28:22.395036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.395133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.395153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:12.481 [2024-07-14 21:28:22.395165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:29:12.481 [2024-07-14 21:28:22.395176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.407142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.407178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:12.481 [2024-07-14 21:28:22.407209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.945 ms 00:29:12.481 [2024-07-14 21:28:22.407235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.419684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.419720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:12.481 [2024-07-14 21:28:22.419750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.409 ms 00:29:12.481 [2024-07-14 21:28:22.419761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.431526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.431578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:12.481 [2024-07-14 21:28:22.431609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.711 ms 00:29:12.481 [2024-07-14 21:28:22.431619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.443475] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:12.481 [2024-07-14 21:28:22.443511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:12.481 [2024-07-14 21:28:22.443542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.780 ms 00:29:12.481 [2024-07-14 21:28:22.443552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.481 [2024-07-14 21:28:22.443590] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:12.481 [2024-07-14 21:28:22.443618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:12.481 [2024-07-14 21:28:22.443632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:12.481 [2024-07-14 21:28:22.443644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:12.481 [2024-07-14 21:28:22.443656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:12.481 [2024-07-14 21:28:22.443667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:12.481 [2024-07-14 21:28:22.443678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:12.481 [2024-07-14 21:28:22.443689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:12.481 [2024-07-14 21:28:22.443700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:12.481 [2024-07-14 21:28:22.443711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:12.481 [2024-07-14 21:28:22.443722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:12.482 [2024-07-14 21:28:22.443876] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:12.482 [2024-07-14 21:28:22.443888] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6f306d99-2f3d-4bb6-87a8-3468ef3a06e7 00:29:12.482 [2024-07-14 21:28:22.443899] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:12.482 [2024-07-14 21:28:22.443910] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:29:12.482 [2024-07-14 21:28:22.443920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:12.482 [2024-07-14 21:28:22.443932] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:12.482 [2024-07-14 21:28:22.443943] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:12.482 [2024-07-14 21:28:22.443954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:12.482 [2024-07-14 21:28:22.443965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:12.482 [2024-07-14 21:28:22.443974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:12.482 [2024-07-14 21:28:22.443984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:12.482 [2024-07-14 21:28:22.443995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.482 [2024-07-14 21:28:22.444006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:12.482 [2024-07-14 21:28:22.444025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:29:12.482 [2024-07-14 21:28:22.444036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.460058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.482 [2024-07-14 21:28:22.460098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:12.482 [2024-07-14 21:28:22.460130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.997 ms 00:29:12.482 [2024-07-14 21:28:22.460141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.460571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.482 [2024-07-14 21:28:22.460595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:12.482 [2024-07-14 21:28:22.460607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:29:12.482 [2024-07-14 21:28:22.460617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.509664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.509718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:12.482 [2024-07-14 21:28:22.509751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.509761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.509828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.509851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:12.482 [2024-07-14 21:28:22.509863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.509874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.509991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.510010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:12.482 [2024-07-14 21:28:22.510022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.510033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.510061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.510075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:12.482 [2024-07-14 21:28:22.510090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.510100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.603919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.603989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:12.482 [2024-07-14 21:28:22.604008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.604021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.686371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.686459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:12.482 [2024-07-14 21:28:22.686488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.686500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.686608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.686625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:12.482 [2024-07-14 21:28:22.686636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.686646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.686697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.686712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:12.482 [2024-07-14 21:28:22.686723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.686739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.686889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.686910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:12.482 [2024-07-14 21:28:22.686923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.686934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.686984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.687002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:12.482 [2024-07-14 21:28:22.687013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.687024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.687077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.687092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:12.482 [2024-07-14 21:28:22.687103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.687114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.687164] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.482 [2024-07-14 21:28:22.687210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:12.482 [2024-07-14 21:28:22.687222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.482 [2024-07-14 21:28:22.687251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.482 [2024-07-14 21:28:22.687382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8798.140 ms, result 0 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85985 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85985 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85985 ']' 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:15.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:15.766 21:28:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:15.766 [2024-07-14 21:28:26.764852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
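
The shutdown statistics above are self-consistent: the band validity dump shows two full bands (2 x 261120 blocks) plus 2048 blocks of band 3, i.e. 524288 user-written blocks, exactly the two 1 GiB fills at FTL's 4 KiB block size, and 786752 total writes over 524288 user writes is the reported WAF. With prep_upgrade_on_shutdown set, that state was persisted, and tcp_target_setup (common.sh@81-91, traced above) now brings the target back from the config snapshot taken at common.sh@126 right after the original setup. A condensed sketch of the round trip; the redirection of save_config into tgt.json is an inference from the [[ -f tgt.json ]] check at @84 and the --config argument at @85:

awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'   # -> 1.5006, as logged

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
$rpc save_config > "$config"        # @126: snapshot bdev/NVMe-oF state, taken earlier
# ... fills and property toggles happen here ...
killprocess "$spdk_tgt_pid"         # stop the original target (pid 85399); FTL persists its state
"$spdk_tgt" '--cpumask=[0]' --config="$config" &   # @85: restart on the saved state
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"       # @91: block until /var/tmp/spdk.sock answers
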
00:29:15.766 [2024-07-14 21:28:26.765074] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85985 ] 00:29:15.766 [2024-07-14 21:28:26.925451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.766 [2024-07-14 21:28:27.088310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.332 [2024-07-14 21:28:27.805099] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:16.332 [2024-07-14 21:28:27.805183] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:16.591 [2024-07-14 21:28:27.951391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.951444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:16.591 [2024-07-14 21:28:27.951484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:16.591 [2024-07-14 21:28:27.951495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.951557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.951575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:16.591 [2024-07-14 21:28:27.951586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:16.591 [2024-07-14 21:28:27.951595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.951624] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:16.591 [2024-07-14 21:28:27.952645] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:16.591 [2024-07-14 21:28:27.952692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.952708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:16.591 [2024-07-14 21:28:27.952721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.073 ms 00:29:16.591 [2024-07-14 21:28:27.952732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.953945] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:16.591 [2024-07-14 21:28:27.968391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.968433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:16.591 [2024-07-14 21:28:27.968468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.447 ms 00:29:16.591 [2024-07-14 21:28:27.968478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.968589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.968610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:16.591 [2024-07-14 21:28:27.968622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:29:16.591 [2024-07-14 21:28:27.968633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.972985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 
21:28:27.973027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:16.591 [2024-07-14 21:28:27.973059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.254 ms 00:29:16.591 [2024-07-14 21:28:27.973069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.973148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.973166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:16.591 [2024-07-14 21:28:27.973178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:29:16.591 [2024-07-14 21:28:27.973191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.973265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.973281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:16.591 [2024-07-14 21:28:27.973291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:16.591 [2024-07-14 21:28:27.973301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.973334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:16.591 [2024-07-14 21:28:27.977320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.977354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:16.591 [2024-07-14 21:28:27.977386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.995 ms 00:29:16.591 [2024-07-14 21:28:27.977397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.977432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.977446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:16.591 [2024-07-14 21:28:27.977457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:16.591 [2024-07-14 21:28:27.977471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.977514] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:16.591 [2024-07-14 21:28:27.977542] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:16.591 [2024-07-14 21:28:27.977579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:16.591 [2024-07-14 21:28:27.977611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:16.591 [2024-07-14 21:28:27.977701] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:16.591 [2024-07-14 21:28:27.977714] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:16.591 [2024-07-14 21:28:27.977736] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:16.591 [2024-07-14 21:28:27.977757] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:16.591 [2024-07-14 21:28:27.977777] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:16.591 [2024-07-14 21:28:27.977796] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:16.591 [2024-07-14 21:28:27.977812] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:16.591 [2024-07-14 21:28:27.977871] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:16.591 [2024-07-14 21:28:27.977893] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:16.591 [2024-07-14 21:28:27.977912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.977930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:16.591 [2024-07-14 21:28:27.977946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:29:16.591 [2024-07-14 21:28:27.977960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.978086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.591 [2024-07-14 21:28:27.978109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:16.591 [2024-07-14 21:28:27.978126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:29:16.591 [2024-07-14 21:28:27.978155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.591 [2024-07-14 21:28:27.978336] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:16.591 [2024-07-14 21:28:27.978369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:16.592 [2024-07-14 21:28:27.978390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:16.592 [2024-07-14 21:28:27.978414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:16.592 [2024-07-14 21:28:27.978460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:16.592 [2024-07-14 21:28:27.978502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:16.592 [2024-07-14 21:28:27.978518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:16.592 [2024-07-14 21:28:27.978528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:16.592 [2024-07-14 21:28:27.978546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:16.592 [2024-07-14 21:28:27.978555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:16.592 [2024-07-14 21:28:27.978573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:16.592 [2024-07-14 21:28:27.978582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:16.592 [2024-07-14 21:28:27.978602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:16.592 [2024-07-14 21:28:27.978611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978620] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:16.592 [2024-07-14 21:28:27.978629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:16.592 [2024-07-14 21:28:27.978638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:16.592 [2024-07-14 21:28:27.978647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:16.592 [2024-07-14 21:28:27.978656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:16.592 [2024-07-14 21:28:27.978665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:16.592 [2024-07-14 21:28:27.978673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:16.592 [2024-07-14 21:28:27.978682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:16.592 [2024-07-14 21:28:27.978691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:16.592 [2024-07-14 21:28:27.978700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:16.592 [2024-07-14 21:28:27.978709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:16.592 [2024-07-14 21:28:27.978718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:16.592 [2024-07-14 21:28:27.978726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:16.592 [2024-07-14 21:28:27.978735] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:16.592 [2024-07-14 21:28:27.978744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:16.592 [2024-07-14 21:28:27.978762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:16.592 [2024-07-14 21:28:27.978771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:16.592 [2024-07-14 21:28:27.978789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:16.592 [2024-07-14 21:28:27.978874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:16.592 [2024-07-14 21:28:27.978883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978892] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:16.592 [2024-07-14 21:28:27.978909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:16.592 [2024-07-14 21:28:27.978921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:16.592 [2024-07-14 21:28:27.978938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:16.592 [2024-07-14 21:28:27.978964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:16.592 [2024-07-14 21:28:27.978976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:16.592 [2024-07-14 21:28:27.978986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:16.592 [2024-07-14 21:28:27.978996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:16.592 [2024-07-14 21:28:27.979018] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:16.592 [2024-07-14 21:28:27.979028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:16.592 [2024-07-14 21:28:27.979041] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:16.592 [2024-07-14 21:28:27.979055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:16.592 [2024-07-14 21:28:27.979077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:16.592 [2024-07-14 21:28:27.979109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:16.592 [2024-07-14 21:28:27.979119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:16.592 [2024-07-14 21:28:27.979130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:16.592 [2024-07-14 21:28:27.979140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:16.592 [2024-07-14 21:28:27.979228] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:16.592 [2024-07-14 21:28:27.979257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:16.592 [2024-07-14 21:28:27.979278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:16.592 [2024-07-14 21:28:27.979289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:16.592 [2024-07-14 21:28:27.979299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:16.592 [2024-07-14 21:28:27.979311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.592 [2024-07-14 21:28:27.979322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:16.592 [2024-07-14 21:28:27.979333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.057 ms 00:29:16.592 [2024-07-14 21:28:27.979364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.592 [2024-07-14 21:28:27.979450] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:16.592 [2024-07-14 21:28:27.979469] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:19.125 [2024-07-14 21:28:30.132357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.132426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:19.125 [2024-07-14 21:28:30.132462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2152.919 ms 00:29:19.125 [2024-07-14 21:28:30.132473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.160655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.160710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:19.125 [2024-07-14 21:28:30.160746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.900 ms 00:29:19.125 [2024-07-14 21:28:30.160764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.161019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.161042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:19.125 [2024-07-14 21:28:30.161055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:19.125 [2024-07-14 21:28:30.161067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.195716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.195770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:19.125 [2024-07-14 21:28:30.195804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.582 ms 00:29:19.125 [2024-07-14 21:28:30.195846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.195930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.195945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:19.125 [2024-07-14 21:28:30.195957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:19.125 [2024-07-14 21:28:30.195967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.196363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.196387] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:19.125 [2024-07-14 21:28:30.196405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:29:19.125 [2024-07-14 21:28:30.196416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.196479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.196494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:19.125 [2024-07-14 21:28:30.196505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:19.125 [2024-07-14 21:28:30.196530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.212241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.212284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:19.125 [2024-07-14 21:28:30.212317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.639 ms 00:29:19.125 [2024-07-14 21:28:30.212328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.226898] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:19.125 [2024-07-14 21:28:30.226938] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:19.125 [2024-07-14 21:28:30.226971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.226982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:19.125 [2024-07-14 21:28:30.226993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.515 ms 00:29:19.125 [2024-07-14 21:28:30.227003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.243096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.243135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:19.125 [2024-07-14 21:28:30.243167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.042 ms 00:29:19.125 [2024-07-14 21:28:30.243177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.256754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.256807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:19.125 [2024-07-14 21:28:30.256843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.523 ms 00:29:19.125 [2024-07-14 21:28:30.256868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.270655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.270694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:19.125 [2024-07-14 21:28:30.270725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.726 ms 00:29:19.125 [2024-07-14 21:28:30.270735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.271628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.271664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:19.125 [2024-07-14 
21:28:30.271679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.728 ms 00:29:19.125 [2024-07-14 21:28:30.271690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.343766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.343861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:19.125 [2024-07-14 21:28:30.343900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.039 ms 00:29:19.125 [2024-07-14 21:28:30.343911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.355099] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:19.125 [2024-07-14 21:28:30.355746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.355786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:19.125 [2024-07-14 21:28:30.355815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.747 ms 00:29:19.125 [2024-07-14 21:28:30.355834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.355938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.355956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:19.125 [2024-07-14 21:28:30.355968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:19.125 [2024-07-14 21:28:30.355978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.356050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.356066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:19.125 [2024-07-14 21:28:30.356078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:19.125 [2024-07-14 21:28:30.356088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.356127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.356141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:19.125 [2024-07-14 21:28:30.356152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:19.125 [2024-07-14 21:28:30.356162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.356198] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:19.125 [2024-07-14 21:28:30.356213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.356223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:19.125 [2024-07-14 21:28:30.356234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:19.125 [2024-07-14 21:28:30.356243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.384235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.384291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:19.125 [2024-07-14 21:28:30.384324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.947 ms 00:29:19.125 [2024-07-14 21:28:30.384334] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.384422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.125 [2024-07-14 21:28:30.384441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:19.125 [2024-07-14 21:28:30.384452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:19.125 [2024-07-14 21:28:30.384462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.125 [2024-07-14 21:28:30.385676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2433.762 ms, result 0 00:29:19.125 [2024-07-14 21:28:30.400719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.125 [2024-07-14 21:28:30.416774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:19.125 [2024-07-14 21:28:30.425411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:19.693 21:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:19.693 21:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:19.693 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:19.693 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:19.693 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:19.952 [2024-07-14 21:28:31.422341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.952 [2024-07-14 21:28:31.422396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:19.952 [2024-07-14 21:28:31.422433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:19.952 [2024-07-14 21:28:31.422444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.952 [2024-07-14 21:28:31.422477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.952 [2024-07-14 21:28:31.422497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:19.952 [2024-07-14 21:28:31.422509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:19.952 [2024-07-14 21:28:31.422519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.952 [2024-07-14 21:28:31.422545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.952 [2024-07-14 21:28:31.422558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:19.952 [2024-07-14 21:28:31.422569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:19.952 [2024-07-14 21:28:31.422578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.952 [2024-07-14 21:28:31.422650] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.296 ms, result 0 00:29:19.952 true 00:29:19.952 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:20.212 { 00:29:20.212 "name": "ftl", 00:29:20.212 "properties": [ 00:29:20.212 { 00:29:20.212 "name": "superblock_version", 00:29:20.212 "value": 5, 00:29:20.212 "read-only": true 00:29:20.212 }, 
00:29:20.212 { 00:29:20.212 "name": "base_device", 00:29:20.212 "bands": [ 00:29:20.212 { 00:29:20.212 "id": 0, 00:29:20.212 "state": "CLOSED", 00:29:20.212 "validity": 1.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 1, 00:29:20.212 "state": "CLOSED", 00:29:20.212 "validity": 1.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 2, 00:29:20.212 "state": "CLOSED", 00:29:20.212 "validity": 0.007843137254901933 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 3, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 4, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 5, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 6, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 7, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 8, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 9, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 10, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 11, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 12, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 13, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 14, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 15, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 16, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 17, 00:29:20.212 "state": "FREE", 00:29:20.212 "validity": 0.0 00:29:20.212 } 00:29:20.212 ], 00:29:20.212 "read-only": true 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "name": "cache_device", 00:29:20.212 "type": "bdev", 00:29:20.212 "chunks": [ 00:29:20.212 { 00:29:20.212 "id": 0, 00:29:20.212 "state": "INACTIVE", 00:29:20.212 "utilization": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 1, 00:29:20.212 "state": "OPEN", 00:29:20.212 "utilization": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 2, 00:29:20.212 "state": "OPEN", 00:29:20.212 "utilization": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 3, 00:29:20.212 "state": "FREE", 00:29:20.212 "utilization": 0.0 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "id": 4, 00:29:20.212 "state": "FREE", 00:29:20.212 "utilization": 0.0 00:29:20.212 } 00:29:20.212 ], 00:29:20.212 "read-only": true 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "name": "verbose_mode", 00:29:20.212 "value": true, 00:29:20.212 "unit": "", 00:29:20.212 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:20.212 }, 00:29:20.212 { 00:29:20.212 "name": "prep_upgrade_on_shutdown", 00:29:20.212 "value": false, 00:29:20.212 "unit": "", 00:29:20.212 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:20.212 } 00:29:20.212 ] 00:29:20.212 } 00:29:20.212 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:20.212 21:28:31 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:20.212 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:20.471 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:20.471 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:20.471 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:20.471 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:20.471 21:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:20.731 Validate MD5 checksum, iteration 1 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:20.731 21:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:20.731 [2024-07-14 21:28:32.251163] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
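[Annotation] The xtrace above first gates the test on FTL state — counting non-empty cache chunks and OPENED bands from `bdev_ftl_get_properties` via jq — and then launches the first MD5 validation pass with `spdk_dd`. A minimal sketch of that validation loop, using the `tcp_dd` helper and file path visible in the trace; the `iterations` bound and the `expected` array of precomputed checksums are assumptions for illustration:

```bash
# Sketch of the validate-checksum pattern from the trace above.
# Assumes the FTL bdev is exported as ftln1 over NVMe/TCP and that
# tcp_dd drives spdk_dd with the same ini.json shown in the log.
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
skip=0
iterations=2   # assumed; the trace shows iterations 1 and 2
for ((i = 0; i < iterations; i++)); do
  echo "Validate MD5 checksum, iteration $((i + 1))"
  # Read 1024 x 1 MiB blocks from the FTL device at the current offset.
  tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  skip=$((skip + 1024))
  # Compare against the checksum recorded when this range was written.
  sum=$(md5sum "$file" | cut -f1 -d' ')
  [[ $sum == "${expected[i]}" ]] || exit 1   # expected[] is hypothetical
done
```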
00:29:20.731 [2024-07-14 21:28:32.251335] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86054 ] 00:29:20.992 [2024-07-14 21:28:32.410026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.254 [2024-07-14 21:28:32.582913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.376  Copying: 504/1024 [MB] (504 MBps) Copying: 997/1024 [MB] (493 MBps) Copying: 1024/1024 [MB] (average 496 MBps) 00:29:25.376 00:29:25.376 21:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:25.376 21:28:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:27.910 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:27.910 Validate MD5 checksum, iteration 2 00:29:27.910 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0bca3a435c84acf71cc8f53676caa40c 00:29:27.910 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0bca3a435c84acf71cc8f53676caa40c != \0\b\c\a\3\a\4\3\5\c\8\4\a\c\f\7\1\c\c\8\f\5\3\6\7\6\c\a\a\4\0\c ]] 00:29:27.910 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:27.910 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:27.910 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:27.910 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:27.911 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:27.911 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:27.911 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:27.911 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:27.911 21:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:27.911 [2024-07-14 21:28:39.060563] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
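[Annotation] The backslash-heavy comparison in the trace above (`[[ 0bca... != \0\b\c\a... ]]`) is not corruption: inside `[[ ]]` the right-hand side of `!=` is treated as a glob pattern, so the script quotes the expected sum to force a literal match, and bash's xtrace renders that quoting by escaping every character. The un-traced equivalent looks like this:

```bash
sum=$(md5sum "$file" | cut -f1 -d' ')
# Quoting the RHS disables glob interpretation, making this a literal
# byte-for-byte comparison; under set -x it prints as \0\b\c\a...
[[ $sum == "$expected_sum" ]] || { echo "MD5 mismatch" >&2; exit 1; }
```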
00:29:27.911 [2024-07-14 21:28:39.060729] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86127 ] 00:29:27.911 [2024-07-14 21:28:39.235234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.911 [2024-07-14 21:28:39.428652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.653  Copying: 489/1024 [MB] (489 MBps) Copying: 982/1024 [MB] (493 MBps) Copying: 1024/1024 [MB] (average 489 MBps) 00:29:32.653 00:29:32.653 21:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:32.653 21:28:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ea0f07d96ffa189c7589148cc2cbb41d 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ea0f07d96ffa189c7589148cc2cbb41d != \e\a\0\f\0\7\d\9\6\f\f\a\1\8\9\c\7\5\8\9\1\4\8\c\c\2\c\b\b\4\1\d ]] 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85985 ]] 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85985 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86198 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86198 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86198 ']' 00:29:34.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
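[Annotation] Once the second checksum matches, the trace moves to the dirty-shutdown step: the running target (pid 85985 here) is killed with SIGKILL so no clean FTL shutdown can run, then a fresh `spdk_tgt` is relaunched from the saved `tgt.json` and the script blocks in `waitforlisten` until the new RPC socket answers. A sketch of that sequence, with function bodies reconstructed from the trace (the `rootdir` variable is an assumption following SPDK test-script convention):

```bash
# SIGKILL deliberately leaves the FTL device dirty, so the next startup
# must run shared-memory / P2L recovery instead of a clean load.
tcp_target_shutdown_dirty() {
  [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"   # no graceful teardown
  unset spdk_tgt_pid
}

tcp_target_setup() {
  # Relaunch the target from the JSON config saved before the kill, so
  # the same bdevs (ftl on top of cachen1p0) come back up for recovery.
  "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
    --config="$rootdir/test/ftl/config/tgt.json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # block until the RPC socket is live
}
```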
00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.550 21:28:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:34.551 [2024-07-14 21:28:45.875013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:34.551 [2024-07-14 21:28:45.875156] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86198 ] 00:29:34.551 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 85985 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:34.551 [2024-07-14 21:28:46.037563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.808 [2024-07-14 21:28:46.214378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.373 [2024-07-14 21:28:46.910636] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:35.373 [2024-07-14 21:28:46.910718] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:35.632 [2024-07-14 21:28:47.056778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.632 [2024-07-14 21:28:47.056890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:35.632 [2024-07-14 21:28:47.056917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:35.632 [2024-07-14 21:28:47.056944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.632 [2024-07-14 21:28:47.057011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.632 [2024-07-14 21:28:47.057030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:35.632 [2024-07-14 21:28:47.057042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:35.632 [2024-07-14 21:28:47.057052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.632 [2024-07-14 21:28:47.057100] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:35.632 [2024-07-14 21:28:47.058081] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:35.632 [2024-07-14 21:28:47.058115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.632 [2024-07-14 21:28:47.058144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:35.632 [2024-07-14 21:28:47.058156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.022 ms 00:29:35.632 [2024-07-14 21:28:47.058181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.632 [2024-07-14 21:28:47.058684] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:35.632 [2024-07-14 21:28:47.077623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.632 [2024-07-14 21:28:47.077661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:35.632 [2024-07-14 21:28:47.077693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.941 ms 00:29:35.632 [2024-07-14 21:28:47.077709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.632 [2024-07-14 21:28:47.088767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:35.632 [2024-07-14 21:28:47.088817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:35.632 [2024-07-14 21:28:47.088834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:35.632 [2024-07-14 21:28:47.088845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.632 [2024-07-14 21:28:47.089374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.632 [2024-07-14 21:28:47.089423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:35.632 [2024-07-14 21:28:47.089443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:29:35.632 [2024-07-14 21:28:47.089454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.632 [2024-07-14 21:28:47.089516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.632 [2024-07-14 21:28:47.089534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:35.632 [2024-07-14 21:28:47.089562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:35.633 [2024-07-14 21:28:47.089572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.633 [2024-07-14 21:28:47.089623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.633 [2024-07-14 21:28:47.089652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:35.633 [2024-07-14 21:28:47.089681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:35.633 [2024-07-14 21:28:47.089711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.633 [2024-07-14 21:28:47.089743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:35.633 [2024-07-14 21:28:47.093420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.633 [2024-07-14 21:28:47.093456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:35.633 [2024-07-14 21:28:47.093487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.684 ms 00:29:35.633 [2024-07-14 21:28:47.093497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.633 [2024-07-14 21:28:47.093535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.633 [2024-07-14 21:28:47.093551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:35.633 [2024-07-14 21:28:47.093562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:35.633 [2024-07-14 21:28:47.093573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.633 [2024-07-14 21:28:47.093613] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:35.633 [2024-07-14 21:28:47.093640] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:35.633 [2024-07-14 21:28:47.093679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:35.633 [2024-07-14 21:28:47.093698] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:35.633 [2024-07-14 21:28:47.093790] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:35.633 [2024-07-14 21:28:47.093804] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:35.633 [2024-07-14 21:28:47.093849] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:35.633 [2024-07-14 21:28:47.093864] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:35.633 [2024-07-14 21:28:47.093877] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:35.633 [2024-07-14 21:28:47.093888] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:35.633 [2024-07-14 21:28:47.093899] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:35.633 [2024-07-14 21:28:47.093929] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:35.633 [2024-07-14 21:28:47.093940] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:35.633 [2024-07-14 21:28:47.093951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.633 [2024-07-14 21:28:47.093962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:35.633 [2024-07-14 21:28:47.093977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.341 ms 00:29:35.633 [2024-07-14 21:28:47.093987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.633 [2024-07-14 21:28:47.094074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.633 [2024-07-14 21:28:47.094087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:35.633 [2024-07-14 21:28:47.094098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:29:35.633 [2024-07-14 21:28:47.094108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.633 [2024-07-14 21:28:47.094248] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:35.633 [2024-07-14 21:28:47.094264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:35.633 [2024-07-14 21:28:47.094276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:35.633 [2024-07-14 21:28:47.094287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:35.633 [2024-07-14 21:28:47.094308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:35.633 [2024-07-14 21:28:47.094328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:35.633 [2024-07-14 21:28:47.094338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:35.633 [2024-07-14 21:28:47.094348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:35.633 [2024-07-14 21:28:47.094367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:35.633 [2024-07-14 21:28:47.094376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:35.633 [2024-07-14 21:28:47.094396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:35.633 [2024-07-14 21:28:47.094405] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:35.633 [2024-07-14 21:28:47.094425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:35.633 [2024-07-14 21:28:47.094434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:35.633 [2024-07-14 21:28:47.094454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:35.633 [2024-07-14 21:28:47.094464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:35.633 [2024-07-14 21:28:47.094473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:35.633 [2024-07-14 21:28:47.094483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:35.633 [2024-07-14 21:28:47.094492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:35.633 [2024-07-14 21:28:47.094502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:35.633 [2024-07-14 21:28:47.094512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:35.633 [2024-07-14 21:28:47.094521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:35.633 [2024-07-14 21:28:47.094530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:35.633 [2024-07-14 21:28:47.094540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:35.633 [2024-07-14 21:28:47.094550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:35.633 [2024-07-14 21:28:47.094559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:35.633 [2024-07-14 21:28:47.094569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:35.633 [2024-07-14 21:28:47.094578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:35.633 [2024-07-14 21:28:47.094614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:35.633 [2024-07-14 21:28:47.094624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:35.633 [2024-07-14 21:28:47.094643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:35.633 [2024-07-14 21:28:47.094671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:35.633 [2024-07-14 21:28:47.094680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094705] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:35.633 [2024-07-14 21:28:47.094716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:35.633 [2024-07-14 21:28:47.094728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:35.633 [2024-07-14 21:28:47.094738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:35.633 [2024-07-14 21:28:47.094749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:35.633 [2024-07-14 21:28:47.094759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:35.633 [2024-07-14 21:28:47.094780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:35.633 [2024-07-14 21:28:47.094791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:35.633 [2024-07-14 21:28:47.094801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:35.633 [2024-07-14 21:28:47.094811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:35.633 [2024-07-14 21:28:47.094822] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:35.633 [2024-07-14 21:28:47.094839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.094851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:35.633 [2024-07-14 21:28:47.094862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.094887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.094899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:35.633 [2024-07-14 21:28:47.094910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:35.633 [2024-07-14 21:28:47.094937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:35.633 [2024-07-14 21:28:47.094948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:35.633 [2024-07-14 21:28:47.094958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.094969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.094980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.094991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.095002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.095014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.095025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:35.633 [2024-07-14 21:28:47.095036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:35.633 [2024-07-14 21:28:47.095048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.633 [2024-07-14 21:28:47.095060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:35.634 [2024-07-14 21:28:47.095071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:35.634 [2024-07-14 21:28:47.095082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:35.634 [2024-07-14 21:28:47.095108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:35.634 [2024-07-14 21:28:47.095120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.095132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:35.634 [2024-07-14 21:28:47.095145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.949 ms 00:29:35.634 [2024-07-14 21:28:47.095155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.634 [2024-07-14 21:28:47.125075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.125146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:35.634 [2024-07-14 21:28:47.125165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.836 ms 00:29:35.634 [2024-07-14 21:28:47.125177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.634 [2024-07-14 21:28:47.125269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.125283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:35.634 [2024-07-14 21:28:47.125293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:35.634 [2024-07-14 21:28:47.125308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.634 [2024-07-14 21:28:47.158970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.159018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:35.634 [2024-07-14 21:28:47.159051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.584 ms 00:29:35.634 [2024-07-14 21:28:47.159065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.634 [2024-07-14 21:28:47.159127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.159146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:35.634 [2024-07-14 21:28:47.159158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:35.634 [2024-07-14 21:28:47.159168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.634 [2024-07-14 21:28:47.159312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.159328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:35.634 [2024-07-14 21:28:47.159339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:29:35.634 [2024-07-14 21:28:47.159349] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:35.634 [2024-07-14 21:28:47.159397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.159411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:35.634 [2024-07-14 21:28:47.159425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:35.634 [2024-07-14 21:28:47.159435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.634 [2024-07-14 21:28:47.174754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.174825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:35.634 [2024-07-14 21:28:47.174854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.295 ms 00:29:35.634 [2024-07-14 21:28:47.174865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.634 [2024-07-14 21:28:47.175004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.634 [2024-07-14 21:28:47.175022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:35.634 [2024-07-14 21:28:47.175033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:35.634 [2024-07-14 21:28:47.175043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.892 [2024-07-14 21:28:47.206783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.892 [2024-07-14 21:28:47.206861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:35.892 [2024-07-14 21:28:47.206880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.717 ms 00:29:35.892 [2024-07-14 21:28:47.206891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.892 [2024-07-14 21:28:47.217606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.892 [2024-07-14 21:28:47.217642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:35.892 [2024-07-14 21:28:47.217672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:29:35.893 [2024-07-14 21:28:47.217682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.893 [2024-07-14 21:28:47.281451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.893 [2024-07-14 21:28:47.281509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:35.893 [2024-07-14 21:28:47.281544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 63.705 ms 00:29:35.893 [2024-07-14 21:28:47.281554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.893 [2024-07-14 21:28:47.281737] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:35.893 [2024-07-14 21:28:47.281892] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:35.893 [2024-07-14 21:28:47.282013] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:35.893 [2024-07-14 21:28:47.282120] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:35.893 [2024-07-14 21:28:47.282132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.893 [2024-07-14 21:28:47.282142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:35.893 [2024-07-14 
21:28:47.282154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:29:35.893 [2024-07-14 21:28:47.282163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.893 [2024-07-14 21:28:47.282253] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:35.893 [2024-07-14 21:28:47.282272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.893 [2024-07-14 21:28:47.282281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:35.893 [2024-07-14 21:28:47.282292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:35.893 [2024-07-14 21:28:47.282302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.893 [2024-07-14 21:28:47.299058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.893 [2024-07-14 21:28:47.299099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:35.893 [2024-07-14 21:28:47.299133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.730 ms 00:29:35.893 [2024-07-14 21:28:47.299149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.893 [2024-07-14 21:28:47.309401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.893 [2024-07-14 21:28:47.309437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:35.893 [2024-07-14 21:28:47.309469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:35.893 [2024-07-14 21:28:47.309480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.893 [2024-07-14 21:28:47.309688] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:36.461 [2024-07-14 21:28:47.875476] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:36.461 [2024-07-14 21:28:47.875692] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:37.028 [2024-07-14 21:28:48.455118] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:37.028 [2024-07-14 21:28:48.455233] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:37.028 [2024-07-14 21:28:48.455263] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:37.028 [2024-07-14 21:28:48.455281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.455294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:37.028 [2024-07-14 21:28:48.455338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1145.713 ms 00:29:37.028 [2024-07-14 21:28:48.455349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.455422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.455436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:37.028 [2024-07-14 21:28:48.455448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:37.028 [2024-07-14 21:28:48.455458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:29:37.028 [2024-07-14 21:28:48.467528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:37.028 [2024-07-14 21:28:48.467674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.467695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:37.028 [2024-07-14 21:28:48.467708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.196 ms 00:29:37.028 [2024-07-14 21:28:48.467718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.468596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.468628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:37.028 [2024-07-14 21:28:48.468644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.772 ms 00:29:37.028 [2024-07-14 21:28:48.468656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.471127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.471188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:37.028 [2024-07-14 21:28:48.471217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.444 ms 00:29:37.028 [2024-07-14 21:28:48.471227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.471271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.471285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:37.028 [2024-07-14 21:28:48.471296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:37.028 [2024-07-14 21:28:48.471305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.471413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.471429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:37.028 [2024-07-14 21:28:48.471443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:37.028 [2024-07-14 21:28:48.471453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.471477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.471489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:37.028 [2024-07-14 21:28:48.471503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:37.028 [2024-07-14 21:28:48.471513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.471549] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:37.028 [2024-07-14 21:28:48.471564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 [2024-07-14 21:28:48.471574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:37.028 [2024-07-14 21:28:48.471584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:37.028 [2024-07-14 21:28:48.471597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.471651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.028 
[2024-07-14 21:28:48.471664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:37.028 [2024-07-14 21:28:48.471675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:37.028 [2024-07-14 21:28:48.471684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.028 [2024-07-14 21:28:48.472933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1415.553 ms, result 0 00:29:37.028 [2024-07-14 21:28:48.488210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.028 [2024-07-14 21:28:48.504182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:37.028 [2024-07-14 21:28:48.512368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:37.028 Validate MD5 checksum, iteration 1 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:37.028 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:37.029 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:37.029 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:37.029 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:37.029 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:37.029 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:37.029 21:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:37.287 [2024-07-14 21:28:48.670951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
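The tcp_dd call traced just above (ftl/common.sh@198-199) is a thin wrapper that points spdk_dd at the NVMe/TCP target now listening on 127.0.0.1:4420. Reconstructed from the xtrace, it amounts to roughly the sketch below; this is a simplified reading of the trace, not the verbatim helper, and the body of the setup step beyond the @153-154 config check is not replayed in this run.

    # Sketch of tcp_dd and its setup step (ftl/common.sh@151-199), pieced
    # together from the xtrace above; simplified, not the verbatim source.
    tcp_initiator_setup() {
        local rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'   # @151
        # @153-154: the initiator config was generated earlier in the test, so reuse it
        [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] && return 0
        # otherwise the real helper would generate ini.json through $rpc (not replayed here)
    }

    tcp_dd() {
        tcp_initiator_setup   # @198
        # @199: run spdk_dd pinned to core 1 against the generated initiator config
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
            "$@"
    }
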
00:29:37.287 [2024-07-14 21:28:48.671368] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86234 ] 00:29:37.545 [2024-07-14 21:28:48.852603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.545 [2024-07-14 21:28:49.068463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.374  Copying: 494/1024 [MB] (494 MBps) Copying: 964/1024 [MB] (470 MBps) Copying: 1024/1024 [MB] (average 476 MBps) 00:29:42.374 00:29:42.374 21:28:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:42.374 21:28:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0bca3a435c84acf71cc8f53676caa40c 00:29:44.273 Validate MD5 checksum, iteration 2 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0bca3a435c84acf71cc8f53676caa40c != \0\b\c\a\3\a\4\3\5\c\8\4\a\c\f\7\1\c\c\8\f\5\3\6\7\6\c\a\a\4\0\c ]] 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:44.273 21:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:44.273 [2024-07-14 21:28:55.616341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
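Both Copying passes are driven by test_validate_checksum (ftl/upgrade_shutdown.sh@96-105), which reads the FTL bdev back in 1 GiB windows and checks each window's MD5 against a value recorded before the shutdown. Pieced together from the xtrace, the loop is roughly the sketch below; iterations and the checksums array are assumed names for state the trace does not show, and the real helper fails the test when the sums differ.

    # Sketch of test_validate_checksum (ftl/upgrade_shutdown.sh@96-105) as
    # replayed by the xtrace; 'iterations' and 'checksums' are assumed names.
    test_validate_checksum() {
        local skip=0                                            # @96
        for ((i = 0; i < iterations; i++)); do                  # @97
            echo "Validate MD5 checksum, iteration $((i + 1))"  # @98
            # @99: read 1024 x 1 MiB blocks from ftln1 over NVMe/TCP into a scratch file
            tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
                --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))                               # @100: advance by 1 GiB
            local sum
            sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')  # @102-103
            [[ $sum == "${checksums[i]}" ]]                     # @105: must match the pre-shutdown sum
        done
    }
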
00:29:44.273 [2024-07-14 21:28:55.616664] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86307 ] 00:29:44.273 [2024-07-14 21:28:55.775833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.531 [2024-07-14 21:28:55.980469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.636  Copying: 469/1024 [MB] (469 MBps) Copying: 955/1024 [MB] (486 MBps) Copying: 1024/1024 [MB] (average 477 MBps) 00:29:48.636 00:29:48.636 21:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:48.636 21:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ea0f07d96ffa189c7589148cc2cbb41d 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ea0f07d96ffa189c7589148cc2cbb41d != \e\a\0\f\0\7\d\9\6\f\f\a\1\8\9\c\7\5\8\9\1\4\8\c\c\2\c\b\b\4\1\d ]] 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:50.558 21:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86198 ]] 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86198 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86198 ']' 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86198 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86198 00:29:50.817 killing process with pid 86198 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86198' 00:29:50.817 21:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86198 00:29:50.817 21:29:02 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86198 00:29:51.755 [2024-07-14 21:29:02.985281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:51.755 [2024-07-14 21:29:03.001401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.001481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:51.755 [2024-07-14 21:29:03.001516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:51.755 [2024-07-14 21:29:03.001527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.001571] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:51.755 [2024-07-14 21:29:03.004889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.004952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:51.755 [2024-07-14 21:29:03.004994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.297 ms 00:29:51.755 [2024-07-14 21:29:03.005006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.005264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.005287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:51.755 [2024-07-14 21:29:03.005308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.233 ms 00:29:51.755 [2024-07-14 21:29:03.005318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.006635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.006690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:51.755 [2024-07-14 21:29:03.006721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:29:51.755 [2024-07-14 21:29:03.006732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.007986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.008031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:51.755 [2024-07-14 21:29:03.008059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.199 ms 00:29:51.755 [2024-07-14 21:29:03.008077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.018952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.018988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:51.755 [2024-07-14 21:29:03.019018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.811 ms 00:29:51.755 [2024-07-14 21:29:03.019028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.025068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.025122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:51.755 [2024-07-14 21:29:03.025174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.985 ms 00:29:51.755 [2024-07-14 21:29:03.025185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.025275] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.025292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:51.755 [2024-07-14 21:29:03.025304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:51.755 [2024-07-14 21:29:03.025314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.036430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.036480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:51.755 [2024-07-14 21:29:03.036509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.097 ms 00:29:51.755 [2024-07-14 21:29:03.036518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.047510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.047578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:51.755 [2024-07-14 21:29:03.047607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.929 ms 00:29:51.755 [2024-07-14 21:29:03.047617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.058401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.058450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:51.755 [2024-07-14 21:29:03.058479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.746 ms 00:29:51.755 [2024-07-14 21:29:03.058489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.069892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.755 [2024-07-14 21:29:03.069942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:51.755 [2024-07-14 21:29:03.069971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.332 ms 00:29:51.755 [2024-07-14 21:29:03.069980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.755 [2024-07-14 21:29:03.070018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:51.755 [2024-07-14 21:29:03.070039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:51.755 [2024-07-14 21:29:03.070051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:51.755 [2024-07-14 21:29:03.070062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:51.755 [2024-07-14 21:29:03.070072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.755 [2024-07-14 21:29:03.070083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.755 [2024-07-14 21:29:03.070093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.756 [2024-07-14 21:29:03.070277] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:51.756 [2024-07-14 21:29:03.070302] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6f306d99-2f3d-4bb6-87a8-3468ef3a06e7 00:29:51.756 [2024-07-14 21:29:03.070318] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:51.756 [2024-07-14 21:29:03.070332] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:51.756 [2024-07-14 21:29:03.070343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:51.756 [2024-07-14 21:29:03.070353] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:51.756 [2024-07-14 21:29:03.070363] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:51.756 [2024-07-14 21:29:03.070374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:51.756 [2024-07-14 21:29:03.070385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:51.756 [2024-07-14 21:29:03.070395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:51.756 [2024-07-14 21:29:03.070404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:51.756 [2024-07-14 21:29:03.070415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.756 [2024-07-14 21:29:03.070425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:51.756 [2024-07-14 21:29:03.070438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:29:51.756 [2024-07-14 21:29:03.070449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.084470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.756 [2024-07-14 21:29:03.084520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:51.756 [2024-07-14 21:29:03.084575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.998 ms 00:29:51.756 [2024-07-14 21:29:03.084586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.085041] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:51.756 [2024-07-14 21:29:03.085074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:51.756 [2024-07-14 21:29:03.085088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:29:51.756 [2024-07-14 21:29:03.085098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.128145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.128206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:51.756 [2024-07-14 21:29:03.128237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.128247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.128287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.128301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:51.756 [2024-07-14 21:29:03.128311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.128321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.128413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.128430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:51.756 [2024-07-14 21:29:03.128457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.128483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.128522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.128535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:51.756 [2024-07-14 21:29:03.128557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.128586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.213694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.213773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:51.756 [2024-07-14 21:29:03.213806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.213827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.285517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.285592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:51.756 [2024-07-14 21:29:03.285625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.285637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.285743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.285761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:51.756 [2024-07-14 21:29:03.285773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.285784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 
21:29:03.285854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.285888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:51.756 [2024-07-14 21:29:03.285916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.285926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.286060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.286085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:51.756 [2024-07-14 21:29:03.286098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.286109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.286164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.286182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:51.756 [2024-07-14 21:29:03.286195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.286207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.286253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.286275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:51.756 [2024-07-14 21:29:03.286288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.286314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.286366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.756 [2024-07-14 21:29:03.286383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:51.756 [2024-07-14 21:29:03.286394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.756 [2024-07-14 21:29:03.286406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.756 [2024-07-14 21:29:03.286545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 285.112 ms, result 0 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:53.130 Remove shared memory files 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85985 
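Teardown here chains two stock helpers: killprocess (common/autotest_common.sh@948-972) stops the target, which is what triggered the 'FTL shutdown' management sequence above, and remove_shm (ftl/common.sh@204-209), whose remaining rm -f steps continue just below. From the xtrace, killprocess is approximately the sketch that follows; the sudo branch is inferred from the @958 comparison and is not taken in this run.

    # Sketch of killprocess (common/autotest_common.sh@948-972), reconstructed
    # from the xtrace; simplified, and the sudo path is an inferred branch.
    killprocess() {
        local pid=$1
        [[ -n $pid ]]                                        # @948: require a pid
        kill -0 "$pid"                                       # @952: process must still exist
        local process_name
        if [[ $(uname) == Linux ]]; then                     # @953
            process_name=$(ps --no-headers -o comm= "$pid")  # @954: 'reactor_0' here
        fi
        if [[ $process_name == sudo ]]; then                 # @958: false in this run
            sudo kill "$pid"                                 # inferred branch, not replayed above
        else
            echo "killing process with pid $pid"             # @966
            kill "$pid"                                      # @967
        fi
        wait "$pid"                                          # @972: returns once FTL shutdown completes
    }
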
00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:53.130 21:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:53.130 ************************************ 00:29:53.130 END TEST ftl_upgrade_shutdown 00:29:53.130 ************************************ 00:29:53.130 00:29:53.130 real 1m31.900s 00:29:53.130 user 2m11.986s 00:29:53.130 sys 0m22.211s 00:29:53.131 21:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:53.131 21:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:53.131 21:29:04 ftl -- common/autotest_common.sh@1142 -- # return 0 00:29:53.131 21:29:04 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:53.131 21:29:04 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:53.131 21:29:04 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:29:53.131 21:29:04 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.131 21:29:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:53.131 ************************************ 00:29:53.131 START TEST ftl_restore_fast 00:29:53.131 ************************************ 00:29:53.131 21:29:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:53.389 * Looking for test storage... 00:29:53.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
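From here on, restore.sh repeatedly sizes bdevs with get_bdev_size (common/autotest_common.sh@1378-1388); its xtrace below shows the whole helper, which amounts to the sketch that follows. The MiB arithmetic matches the values replayed later: 4096-byte blocks times 1310720 blocks gives 5120 MiB for nvme0n1, and times 26476544 gives 103424 MiB for the lvol.

    # Sketch of get_bdev_size (common/autotest_common.sh@1378-1388),
    # reconstructed from the xtrace below; simplified, not the verbatim source.
    get_bdev_size() {
        local bdev_name=$1                                   # @1378
        local bdev_info bs nb                                # @1379-1381
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            bdev_get_bdevs -b "$bdev_name")                  # @1382
        bs=$(jq '.[] .block_size' <<< "$bdev_info")          # @1383
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")          # @1384
        local bdev_size=$((bs * nb / 1024 / 1024))           # @1387: size in MiB
        echo "$bdev_size"                                    # @1388
    }
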
00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:53.389 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.9FFxgacDPJ 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:53.390 21:29:04 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=86464 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 86464 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 86464 ']' 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:53.390 21:29:04 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:53.390 [2024-07-14 21:29:04.905986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:53.390 [2024-07-14 21:29:04.906177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86464 ] 00:29:53.648 [2024-07-14 21:29:05.077468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.907 [2024-07-14 21:29:05.233211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.475 21:29:05 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:54.475 21:29:05 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:29:54.475 21:29:05 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:54.475 21:29:05 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:54.475 21:29:05 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:54.475 21:29:05 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:54.475 21:29:05 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:54.475 21:29:05 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:54.734 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:54.734 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:54.734 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:54.734 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:29:54.734 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:54.734 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:54.734 21:29:06 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:29:54.734 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:54.995 { 00:29:54.995 "name": "nvme0n1", 00:29:54.995 "aliases": [ 00:29:54.995 "a9b418d6-969d-480a-a92b-543db889073a" 00:29:54.995 ], 00:29:54.995 "product_name": "NVMe disk", 00:29:54.995 "block_size": 4096, 00:29:54.995 "num_blocks": 1310720, 00:29:54.995 "uuid": "a9b418d6-969d-480a-a92b-543db889073a", 00:29:54.995 "assigned_rate_limits": { 00:29:54.995 "rw_ios_per_sec": 0, 00:29:54.995 "rw_mbytes_per_sec": 0, 00:29:54.995 "r_mbytes_per_sec": 0, 00:29:54.995 "w_mbytes_per_sec": 0 00:29:54.995 }, 00:29:54.995 "claimed": true, 00:29:54.995 "claim_type": "read_many_write_one", 00:29:54.995 "zoned": false, 00:29:54.995 "supported_io_types": { 00:29:54.995 "read": true, 00:29:54.995 "write": true, 00:29:54.995 "unmap": true, 00:29:54.995 "flush": true, 00:29:54.995 "reset": true, 00:29:54.995 "nvme_admin": true, 00:29:54.995 "nvme_io": true, 00:29:54.995 "nvme_io_md": false, 00:29:54.995 "write_zeroes": true, 00:29:54.995 "zcopy": false, 00:29:54.995 "get_zone_info": false, 00:29:54.995 "zone_management": false, 00:29:54.995 "zone_append": false, 00:29:54.995 "compare": true, 00:29:54.995 "compare_and_write": false, 00:29:54.995 "abort": true, 00:29:54.995 "seek_hole": false, 00:29:54.995 "seek_data": false, 00:29:54.995 "copy": true, 00:29:54.995 "nvme_iov_md": false 00:29:54.995 }, 00:29:54.995 "driver_specific": { 00:29:54.995 "nvme": [ 00:29:54.995 { 00:29:54.995 "pci_address": "0000:00:11.0", 00:29:54.995 "trid": { 00:29:54.995 "trtype": "PCIe", 00:29:54.995 "traddr": "0000:00:11.0" 00:29:54.995 }, 00:29:54.995 "ctrlr_data": { 00:29:54.995 "cntlid": 0, 00:29:54.995 "vendor_id": "0x1b36", 00:29:54.995 "model_number": "QEMU NVMe Ctrl", 00:29:54.995 "serial_number": "12341", 00:29:54.995 "firmware_revision": "8.0.0", 00:29:54.995 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:54.995 "oacs": { 00:29:54.995 "security": 0, 00:29:54.995 "format": 1, 00:29:54.995 "firmware": 0, 00:29:54.995 "ns_manage": 1 00:29:54.995 }, 00:29:54.995 "multi_ctrlr": false, 00:29:54.995 "ana_reporting": false 00:29:54.995 }, 00:29:54.995 "vs": { 00:29:54.995 "nvme_version": "1.4" 00:29:54.995 }, 00:29:54.995 "ns_data": { 00:29:54.995 "id": 1, 00:29:54.995 "can_share": false 00:29:54.995 } 00:29:54.995 } 00:29:54.995 ], 00:29:54.995 "mp_policy": "active_passive" 00:29:54.995 } 00:29:54.995 } 00:29:54.995 ]' 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:54.995 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:55.254 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=0a3dbff0-eb6a-4581-be15-750e74447b8c 00:29:55.254 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:55.254 21:29:06 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a3dbff0-eb6a-4581-be15-750e74447b8c 00:29:55.512 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:55.771 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=49cf1a6a-a117-46fd-b79d-9ee9d265f514 00:29:55.771 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 49cf1a6a-a117-46fd-b79d-9ee9d265f514 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:56.030 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:56.289 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:56.289 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:56.547 { 00:29:56.547 "name": "76d66755-b5db-4dca-a694-f09cd7f8779e", 00:29:56.547 "aliases": [ 00:29:56.547 "lvs/nvme0n1p0" 00:29:56.547 ], 00:29:56.547 "product_name": "Logical Volume", 00:29:56.547 "block_size": 4096, 00:29:56.547 "num_blocks": 26476544, 00:29:56.547 "uuid": "76d66755-b5db-4dca-a694-f09cd7f8779e", 00:29:56.547 "assigned_rate_limits": { 00:29:56.547 "rw_ios_per_sec": 0, 00:29:56.547 "rw_mbytes_per_sec": 0, 00:29:56.547 "r_mbytes_per_sec": 0, 00:29:56.547 "w_mbytes_per_sec": 0 00:29:56.547 }, 00:29:56.547 "claimed": false, 00:29:56.547 "zoned": false, 00:29:56.547 "supported_io_types": { 00:29:56.547 "read": true, 00:29:56.547 "write": true, 00:29:56.547 "unmap": true, 00:29:56.547 "flush": false, 00:29:56.547 "reset": true, 00:29:56.547 "nvme_admin": false, 00:29:56.547 "nvme_io": false, 00:29:56.547 "nvme_io_md": false, 00:29:56.547 "write_zeroes": true, 00:29:56.547 "zcopy": false, 00:29:56.547 "get_zone_info": false, 00:29:56.547 "zone_management": false, 00:29:56.547 
"zone_append": false, 00:29:56.547 "compare": false, 00:29:56.547 "compare_and_write": false, 00:29:56.547 "abort": false, 00:29:56.547 "seek_hole": true, 00:29:56.547 "seek_data": true, 00:29:56.547 "copy": false, 00:29:56.547 "nvme_iov_md": false 00:29:56.547 }, 00:29:56.547 "driver_specific": { 00:29:56.547 "lvol": { 00:29:56.547 "lvol_store_uuid": "49cf1a6a-a117-46fd-b79d-9ee9d265f514", 00:29:56.547 "base_bdev": "nvme0n1", 00:29:56.547 "thin_provision": true, 00:29:56.547 "num_allocated_clusters": 0, 00:29:56.547 "snapshot": false, 00:29:56.547 "clone": false, 00:29:56.547 "esnap_clone": false 00:29:56.547 } 00:29:56.547 } 00:29:56.547 } 00:29:56.547 ]' 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:56.547 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:56.548 21:29:07 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:56.806 21:29:08 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:56.806 21:29:08 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:56.806 21:29:08 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:56.806 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:56.806 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:56.806 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:56.806 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:56.806 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:57.065 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:57.065 { 00:29:57.065 "name": "76d66755-b5db-4dca-a694-f09cd7f8779e", 00:29:57.065 "aliases": [ 00:29:57.065 "lvs/nvme0n1p0" 00:29:57.065 ], 00:29:57.065 "product_name": "Logical Volume", 00:29:57.065 "block_size": 4096, 00:29:57.065 "num_blocks": 26476544, 00:29:57.065 "uuid": "76d66755-b5db-4dca-a694-f09cd7f8779e", 00:29:57.066 "assigned_rate_limits": { 00:29:57.066 "rw_ios_per_sec": 0, 00:29:57.066 "rw_mbytes_per_sec": 0, 00:29:57.066 "r_mbytes_per_sec": 0, 00:29:57.066 "w_mbytes_per_sec": 0 00:29:57.066 }, 00:29:57.066 "claimed": false, 00:29:57.066 "zoned": false, 00:29:57.066 "supported_io_types": { 00:29:57.066 "read": true, 00:29:57.066 "write": true, 00:29:57.066 "unmap": true, 00:29:57.066 "flush": false, 00:29:57.066 "reset": true, 00:29:57.066 "nvme_admin": false, 00:29:57.066 "nvme_io": false, 00:29:57.066 "nvme_io_md": false, 00:29:57.066 "write_zeroes": true, 00:29:57.066 "zcopy": false, 00:29:57.066 "get_zone_info": false, 00:29:57.066 
"zone_management": false, 00:29:57.066 "zone_append": false, 00:29:57.066 "compare": false, 00:29:57.066 "compare_and_write": false, 00:29:57.066 "abort": false, 00:29:57.066 "seek_hole": true, 00:29:57.066 "seek_data": true, 00:29:57.066 "copy": false, 00:29:57.066 "nvme_iov_md": false 00:29:57.066 }, 00:29:57.066 "driver_specific": { 00:29:57.066 "lvol": { 00:29:57.066 "lvol_store_uuid": "49cf1a6a-a117-46fd-b79d-9ee9d265f514", 00:29:57.066 "base_bdev": "nvme0n1", 00:29:57.066 "thin_provision": true, 00:29:57.066 "num_allocated_clusters": 0, 00:29:57.066 "snapshot": false, 00:29:57.066 "clone": false, 00:29:57.066 "esnap_clone": false 00:29:57.066 } 00:29:57.066 } 00:29:57.066 } 00:29:57.066 ]' 00:29:57.066 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:57.066 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:57.066 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:57.066 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:57.066 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:57.066 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:57.066 21:29:08 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:57.066 21:29:08 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:57.325 21:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:57.325 21:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:57.325 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:57.325 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:57.325 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:57.325 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:57.325 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 76d66755-b5db-4dca-a694-f09cd7f8779e 00:29:57.584 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:57.584 { 00:29:57.584 "name": "76d66755-b5db-4dca-a694-f09cd7f8779e", 00:29:57.584 "aliases": [ 00:29:57.584 "lvs/nvme0n1p0" 00:29:57.584 ], 00:29:57.584 "product_name": "Logical Volume", 00:29:57.584 "block_size": 4096, 00:29:57.584 "num_blocks": 26476544, 00:29:57.584 "uuid": "76d66755-b5db-4dca-a694-f09cd7f8779e", 00:29:57.584 "assigned_rate_limits": { 00:29:57.584 "rw_ios_per_sec": 0, 00:29:57.584 "rw_mbytes_per_sec": 0, 00:29:57.584 "r_mbytes_per_sec": 0, 00:29:57.584 "w_mbytes_per_sec": 0 00:29:57.584 }, 00:29:57.584 "claimed": false, 00:29:57.584 "zoned": false, 00:29:57.584 "supported_io_types": { 00:29:57.584 "read": true, 00:29:57.584 "write": true, 00:29:57.584 "unmap": true, 00:29:57.584 "flush": false, 00:29:57.584 "reset": true, 00:29:57.584 "nvme_admin": false, 00:29:57.584 "nvme_io": false, 00:29:57.584 "nvme_io_md": false, 00:29:57.584 "write_zeroes": true, 00:29:57.584 "zcopy": false, 00:29:57.584 "get_zone_info": false, 00:29:57.584 "zone_management": false, 00:29:57.584 "zone_append": false, 00:29:57.584 "compare": false, 00:29:57.584 "compare_and_write": false, 00:29:57.584 "abort": false, 
00:29:57.584 "seek_hole": true, 00:29:57.584 "seek_data": true, 00:29:57.584 "copy": false, 00:29:57.584 "nvme_iov_md": false 00:29:57.584 }, 00:29:57.584 "driver_specific": { 00:29:57.584 "lvol": { 00:29:57.584 "lvol_store_uuid": "49cf1a6a-a117-46fd-b79d-9ee9d265f514", 00:29:57.584 "base_bdev": "nvme0n1", 00:29:57.584 "thin_provision": true, 00:29:57.584 "num_allocated_clusters": 0, 00:29:57.584 "snapshot": false, 00:29:57.584 "clone": false, 00:29:57.584 "esnap_clone": false 00:29:57.584 } 00:29:57.584 } 00:29:57.584 } 00:29:57.584 ]' 00:29:57.584 21:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 76d66755-b5db-4dca-a694-f09cd7f8779e --l2p_dram_limit 10' 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:57.584 21:29:09 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:57.585 21:29:09 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:57.585 21:29:09 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:57.585 21:29:09 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 76d66755-b5db-4dca-a694-f09cd7f8779e --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:57.845 [2024-07-14 21:29:09.277863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.277945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:57.845 [2024-07-14 21:29:09.277965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:57.845 [2024-07-14 21:29:09.277977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.278050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.278070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.845 [2024-07-14 21:29:09.278081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:57.845 [2024-07-14 21:29:09.278093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.278118] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:57.845 [2024-07-14 21:29:09.278988] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:57.845 [2024-07-14 21:29:09.279015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.279032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.845 [2024-07-14 21:29:09.279044] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:29:57.845 [2024-07-14 21:29:09.279056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.279189] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0deb0b22-1569-42dd-9079-fdc56cfdd0ab 00:29:57.845 [2024-07-14 21:29:09.280197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.280234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:57.845 [2024-07-14 21:29:09.280252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:57.845 [2024-07-14 21:29:09.280262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.284473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.284510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.845 [2024-07-14 21:29:09.284546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.147 ms 00:29:57.845 [2024-07-14 21:29:09.284597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.284706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.284724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.845 [2024-07-14 21:29:09.284737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:57.845 [2024-07-14 21:29:09.284748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.284846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.284882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:57.845 [2024-07-14 21:29:09.284911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:57.845 [2024-07-14 21:29:09.284924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.284972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:57.845 [2024-07-14 21:29:09.288910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.288980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.845 [2024-07-14 21:29:09.288995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.963 ms 00:29:57.845 [2024-07-14 21:29:09.289008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.289049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.289066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:57.845 [2024-07-14 21:29:09.289077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:57.845 [2024-07-14 21:29:09.289088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.289127] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:57.845 [2024-07-14 21:29:09.289277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:57.845 [2024-07-14 21:29:09.289293] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:57.845 [2024-07-14 21:29:09.289309] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:57.845 [2024-07-14 21:29:09.289322] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:57.845 [2024-07-14 21:29:09.289335] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:57.845 [2024-07-14 21:29:09.289346] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:57.845 [2024-07-14 21:29:09.289357] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:57.845 [2024-07-14 21:29:09.289368] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:57.845 [2024-07-14 21:29:09.289380] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:57.845 [2024-07-14 21:29:09.289390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.289401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:57.845 [2024-07-14 21:29:09.289411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:29:57.845 [2024-07-14 21:29:09.289423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.289499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.845 [2024-07-14 21:29:09.289514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:57.845 [2024-07-14 21:29:09.289524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:57.845 [2024-07-14 21:29:09.289535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.845 [2024-07-14 21:29:09.289625] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:57.845 [2024-07-14 21:29:09.289644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:57.845 [2024-07-14 21:29:09.289665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.845 [2024-07-14 21:29:09.289678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.845 [2024-07-14 21:29:09.289688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:57.845 [2024-07-14 21:29:09.289698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:57.845 [2024-07-14 21:29:09.289707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:57.845 [2024-07-14 21:29:09.289718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:57.845 [2024-07-14 21:29:09.289727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:57.845 [2024-07-14 21:29:09.289738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.845 [2024-07-14 21:29:09.289747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:57.845 [2024-07-14 21:29:09.289758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:57.845 [2024-07-14 21:29:09.289767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.845 [2024-07-14 21:29:09.289779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:57.845 [2024-07-14 21:29:09.289788] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:57.845 [2024-07-14 21:29:09.289799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.845 [2024-07-14 21:29:09.289809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:57.845 [2024-07-14 21:29:09.289821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:57.845 [2024-07-14 21:29:09.289868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.845 [2024-07-14 21:29:09.289882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:57.845 [2024-07-14 21:29:09.289891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:57.845 [2024-07-14 21:29:09.289902] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.845 [2024-07-14 21:29:09.289911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:57.845 [2024-07-14 21:29:09.289921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:57.845 [2024-07-14 21:29:09.289930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.846 [2024-07-14 21:29:09.289941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:57.846 [2024-07-14 21:29:09.289965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:57.846 [2024-07-14 21:29:09.289976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.846 [2024-07-14 21:29:09.289985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:57.846 [2024-07-14 21:29:09.289996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:57.846 [2024-07-14 21:29:09.290005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.846 [2024-07-14 21:29:09.290016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:57.846 [2024-07-14 21:29:09.290025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:57.846 [2024-07-14 21:29:09.290037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.846 [2024-07-14 21:29:09.290050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:57.846 [2024-07-14 21:29:09.290061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:57.846 [2024-07-14 21:29:09.290070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.846 [2024-07-14 21:29:09.290080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:57.846 [2024-07-14 21:29:09.290089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:57.846 [2024-07-14 21:29:09.290102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.846 [2024-07-14 21:29:09.290111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:57.846 [2024-07-14 21:29:09.290122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:57.846 [2024-07-14 21:29:09.290131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.846 [2024-07-14 21:29:09.290142] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:57.846 [2024-07-14 21:29:09.290152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:57.846 [2024-07-14 21:29:09.290163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:57.846 [2024-07-14 21:29:09.290173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.846 [2024-07-14 21:29:09.290185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:57.846 [2024-07-14 21:29:09.290196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:57.846 [2024-07-14 21:29:09.290209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:57.846 [2024-07-14 21:29:09.290234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:57.846 [2024-07-14 21:29:09.290246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:57.846 [2024-07-14 21:29:09.290255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:57.846 [2024-07-14 21:29:09.290285] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:57.846 [2024-07-14 21:29:09.290313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.846 [2024-07-14 21:29:09.290345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:57.846 [2024-07-14 21:29:09.290356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:57.846 [2024-07-14 21:29:09.290369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:57.846 [2024-07-14 21:29:09.290379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:57.846 [2024-07-14 21:29:09.290392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:57.846 [2024-07-14 21:29:09.290402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:57.846 [2024-07-14 21:29:09.290415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:57.846 [2024-07-14 21:29:09.290426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:57.846 [2024-07-14 21:29:09.290439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:57.846 [2024-07-14 21:29:09.290450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:57.846 [2024-07-14 21:29:09.290464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:57.846 [2024-07-14 21:29:09.290475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:57.846 [2024-07-14 21:29:09.290487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:57.846 [2024-07-14 21:29:09.290498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
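The layout dump is internally consistent and easy to spot-check: the l2p region and the superblock's type:0x2 entry describe the same 80 MiB, and the reported capacities match the sizes computed earlier (103424 MiB base, 5171 MiB cache). A shell-arithmetic sketch of the check:

    echo $(( 20971520 * 4 ))     # L2P entries x 4 B address size = 83886080 B = 80 MiB
    echo $(( 0x5000 * 4096 ))    # type:0x2 blk_sz x 4 KiB blocks  = 83886080 B = 80 MiB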
00:29:57.846 [2024-07-14 21:29:09.290510] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:57.846 [2024-07-14 21:29:09.290522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.846 [2024-07-14 21:29:09.290535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:57.846 [2024-07-14 21:29:09.290546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:57.846 [2024-07-14 21:29:09.290559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:57.846 [2024-07-14 21:29:09.290570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:57.846 [2024-07-14 21:29:09.290583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.846 [2024-07-14 21:29:09.290594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:57.846 [2024-07-14 21:29:09.290607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:29:57.846 [2024-07-14 21:29:09.290617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.846 [2024-07-14 21:29:09.290683] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:57.846 [2024-07-14 21:29:09.290707] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:00.375 [2024-07-14 21:29:11.456140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.456223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:00.375 [2024-07-14 21:29:11.456261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2165.461 ms 00:30:00.375 [2024-07-14 21:29:11.456272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.482972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.483027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:00.375 [2024-07-14 21:29:11.483064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.487 ms 00:30:00.375 [2024-07-14 21:29:11.483075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.483247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.483263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:00.375 [2024-07-14 21:29:11.483276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:00.375 [2024-07-14 21:29:11.483288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.515288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.515334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:00.375 [2024-07-14 21:29:11.515369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.953 ms 00:30:00.375 [2024-07-14 21:29:11.515379] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.515424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.515443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:00.375 [2024-07-14 21:29:11.515455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:00.375 [2024-07-14 21:29:11.515465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.515796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.515862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:00.375 [2024-07-14 21:29:11.515879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:30:00.375 [2024-07-14 21:29:11.515890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.516021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.516037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:00.375 [2024-07-14 21:29:11.516053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:30:00.375 [2024-07-14 21:29:11.516063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.531177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.531214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:00.375 [2024-07-14 21:29:11.531248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.057 ms 00:30:00.375 [2024-07-14 21:29:11.531259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.542713] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:00.375 [2024-07-14 21:29:11.545358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.545392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:00.375 [2024-07-14 21:29:11.545423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.010 ms 00:30:00.375 [2024-07-14 21:29:11.545435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.614680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.614750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:00.375 [2024-07-14 21:29:11.614785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.211 ms 00:30:00.375 [2024-07-14 21:29:11.614798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.375 [2024-07-14 21:29:11.615034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.375 [2024-07-14 21:29:11.615059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:00.375 [2024-07-14 21:29:11.615103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:30:00.376 [2024-07-14 21:29:11.615146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.641926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 21:29:11.641966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:30:00.376 [2024-07-14 21:29:11.641998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.705 ms 00:30:00.376 [2024-07-14 21:29:11.642010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.670284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 21:29:11.670327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:00.376 [2024-07-14 21:29:11.670343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.228 ms 00:30:00.376 [2024-07-14 21:29:11.670354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.671024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 21:29:11.671059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:00.376 [2024-07-14 21:29:11.671074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:30:00.376 [2024-07-14 21:29:11.671090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.753573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 21:29:11.753652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:00.376 [2024-07-14 21:29:11.753671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.424 ms 00:30:00.376 [2024-07-14 21:29:11.753687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.784930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 21:29:11.784994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:00.376 [2024-07-14 21:29:11.785013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.193 ms 00:30:00.376 [2024-07-14 21:29:11.785027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.814026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 21:29:11.814086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:00.376 [2024-07-14 21:29:11.814103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.949 ms 00:30:00.376 [2024-07-14 21:29:11.814115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.841534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 21:29:11.841575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:00.376 [2024-07-14 21:29:11.841606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.375 ms 00:30:00.376 [2024-07-14 21:29:11.841618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.841675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 21:29:11.841695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:00.376 [2024-07-14 21:29:11.841706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:00.376 [2024-07-14 21:29:11.841720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.841866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.376 [2024-07-14 
21:29:11.841889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:00.376 [2024-07-14 21:29:11.841904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:30:00.376 [2024-07-14 21:29:11.841916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.376 [2024-07-14 21:29:11.843096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2564.689 ms, result 0 00:30:00.376 { 00:30:00.376 "name": "ftl0", 00:30:00.376 "uuid": "0deb0b22-1569-42dd-9079-fdc56cfdd0ab" 00:30:00.376 } 00:30:00.376 21:29:11 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:00.376 21:29:11 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:00.634 21:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:00.634 21:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:00.891 [2024-07-14 21:29:12.406432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.892 [2024-07-14 21:29:12.406694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:00.892 [2024-07-14 21:29:12.406832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:00.892 [2024-07-14 21:29:12.406884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.892 [2024-07-14 21:29:12.406956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:00.892 [2024-07-14 21:29:12.410000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.892 [2024-07-14 21:29:12.410187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:00.892 [2024-07-14 21:29:12.410317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.892 ms 00:30:00.892 [2024-07-14 21:29:12.410366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.892 [2024-07-14 21:29:12.410672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.892 [2024-07-14 21:29:12.410817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:00.892 [2024-07-14 21:29:12.410939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:30:00.892 [2024-07-14 21:29:12.410992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.892 [2024-07-14 21:29:12.413855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.892 [2024-07-14 21:29:12.414037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:00.892 [2024-07-14 21:29:12.414166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms 00:30:00.892 [2024-07-14 21:29:12.414216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.892 [2024-07-14 21:29:12.419851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.892 [2024-07-14 21:29:12.420025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:00.892 [2024-07-14 21:29:12.420127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.583 ms 00:30:00.892 [2024-07-14 21:29:12.420175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.150 [2024-07-14 21:29:12.446657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
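Before the unload, restore.sh snapshots the bdev subsystem configuration so the identical stack can be replayed later without re-issuing every RPC. The echo/RPC trio above amounts to the following (a sketch; the log shows only the three commands, and the redirect target assumed here is the ftl.json path that spdk_dd consumes further down):

    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > test/ftl/config/ftl.json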
[FTL][ftl0] Action 00:30:01.150 [2024-07-14 21:29:12.446713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:01.150 [2024-07-14 21:29:12.446729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.310 ms 00:30:01.150 [2024-07-14 21:29:12.446741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.150 [2024-07-14 21:29:12.462719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.150 [2024-07-14 21:29:12.462776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:01.150 [2024-07-14 21:29:12.462792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.936 ms 00:30:01.150 [2024-07-14 21:29:12.462804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.150 [2024-07-14 21:29:12.463008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.150 [2024-07-14 21:29:12.463031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:01.150 [2024-07-14 21:29:12.463046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:30:01.150 [2024-07-14 21:29:12.463058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.150 [2024-07-14 21:29:12.488604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.150 [2024-07-14 21:29:12.488660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:01.150 [2024-07-14 21:29:12.488676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.525 ms 00:30:01.150 [2024-07-14 21:29:12.488687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.150 [2024-07-14 21:29:12.513774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.150 [2024-07-14 21:29:12.513869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:01.150 [2024-07-14 21:29:12.513886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.043 ms 00:30:01.150 [2024-07-14 21:29:12.513898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.150 [2024-07-14 21:29:12.538372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.150 [2024-07-14 21:29:12.538429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:01.150 [2024-07-14 21:29:12.538443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.433 ms 00:30:01.150 [2024-07-14 21:29:12.538454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.150 [2024-07-14 21:29:12.563102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.150 [2024-07-14 21:29:12.563155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:01.150 [2024-07-14 21:29:12.563171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.568 ms 00:30:01.150 [2024-07-14 21:29:12.563182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.150 [2024-07-14 21:29:12.563221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:01.150 [2024-07-14 21:29:12.563245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563269] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:01.150 [2024-07-14 21:29:12.563507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563539] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 
[2024-07-14 21:29:12.563853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.563996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:30:01.151 [2024-07-14 21:29:12.564169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:01.151 [2024-07-14 21:29:12.564533] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:01.151 [2024-07-14 21:29:12.564547] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0deb0b22-1569-42dd-9079-fdc56cfdd0ab 
00:30:01.151 [2024-07-14 21:29:12.564586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:01.151 [2024-07-14 21:29:12.564596] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:01.151 [2024-07-14 21:29:12.564610] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:01.151 [2024-07-14 21:29:12.564621] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:01.151 [2024-07-14 21:29:12.564642] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:01.151 [2024-07-14 21:29:12.564654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:01.151 [2024-07-14 21:29:12.564666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:01.151 [2024-07-14 21:29:12.564675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:01.151 [2024-07-14 21:29:12.564686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:01.151 [2024-07-14 21:29:12.564697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.151 [2024-07-14 21:29:12.564709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:01.151 [2024-07-14 21:29:12.564721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:30:01.151 [2024-07-14 21:29:12.564734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.151 [2024-07-14 21:29:12.578405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.151 [2024-07-14 21:29:12.578457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:01.151 [2024-07-14 21:29:12.578472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.626 ms 00:30:01.151 [2024-07-14 21:29:12.578484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.151 [2024-07-14 21:29:12.578881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.151 [2024-07-14 21:29:12.578904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:01.151 [2024-07-14 21:29:12.578916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:30:01.151 [2024-07-14 21:29:12.578945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.151 [2024-07-14 21:29:12.619682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.151 [2024-07-14 21:29:12.619741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:01.152 [2024-07-14 21:29:12.619756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.152 [2024-07-14 21:29:12.619767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.152 [2024-07-14 21:29:12.619854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.152 [2024-07-14 21:29:12.619872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:01.152 [2024-07-14 21:29:12.619882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.152 [2024-07-14 21:29:12.619896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.152 [2024-07-14 21:29:12.619995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.152 [2024-07-14 21:29:12.620017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:01.152 [2024-07-14 21:29:12.620028] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.152 [2024-07-14 21:29:12.620040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.152 [2024-07-14 21:29:12.620063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.152 [2024-07-14 21:29:12.620079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:01.152 [2024-07-14 21:29:12.620089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.152 [2024-07-14 21:29:12.620101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.711135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.410 [2024-07-14 21:29:12.711270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:01.410 [2024-07-14 21:29:12.711305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.410 [2024-07-14 21:29:12.711319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.803134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.410 [2024-07-14 21:29:12.803236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:01.410 [2024-07-14 21:29:12.803255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.410 [2024-07-14 21:29:12.803272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.803407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.410 [2024-07-14 21:29:12.803429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:01.410 [2024-07-14 21:29:12.803441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.410 [2024-07-14 21:29:12.803454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.803513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.410 [2024-07-14 21:29:12.803549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:01.410 [2024-07-14 21:29:12.803561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.410 [2024-07-14 21:29:12.803588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.803736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.410 [2024-07-14 21:29:12.803756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:01.410 [2024-07-14 21:29:12.803768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.410 [2024-07-14 21:29:12.803781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.803846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.410 [2024-07-14 21:29:12.803867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:01.410 [2024-07-14 21:29:12.803880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.410 [2024-07-14 21:29:12.803893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.803967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.410 [2024-07-14 21:29:12.803988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:30:01.410 [2024-07-14 21:29:12.804001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.410 [2024-07-14 21:29:12.804014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.804069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.410 [2024-07-14 21:29:12.804091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:01.410 [2024-07-14 21:29:12.804104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.410 [2024-07-14 21:29:12.804117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.410 [2024-07-14 21:29:12.804272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.800 ms, result 0 00:30:01.410 true 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 86464 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 86464 ']' 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 86464 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86464 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:01.410 killing process with pid 86464 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86464' 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 86464 00:30:01.410 21:29:12 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 86464 00:30:06.682 21:29:17 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:10.874 262144+0 records in 00:30:10.874 262144+0 records out 00:30:10.874 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.20436 s, 255 MB/s 00:30:10.874 21:29:22 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:12.780 21:29:23 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:12.780 [2024-07-14 21:29:23.923152] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
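The dd transcript's numbers check out: 256K blocks of 4 KiB is exactly 1 GiB, and the quoted rate follows from the elapsed time; the md5sum is recorded so the data can be verified again after the restore. A quick sketch of the arithmetic:

    echo $(( 262144 * 4096 ))                        # 1073741824 B = 1.0 GiB written
    python3 -c 'print(1073741824 / 4.20436 / 1e6)'   # ~255.4 MB/s, matching dd's summary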
00:30:12.780 [2024-07-14 21:29:23.923318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86680 ] 00:30:12.780 [2024-07-14 21:29:24.095388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.780 [2024-07-14 21:29:24.317369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.038 [2024-07-14 21:29:24.568396] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:13.038 [2024-07-14 21:29:24.568482] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:13.299 [2024-07-14 21:29:24.724261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.724310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:13.299 [2024-07-14 21:29:24.724343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:13.299 [2024-07-14 21:29:24.724353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.724414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.724432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:13.299 [2024-07-14 21:29:24.724443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:13.299 [2024-07-14 21:29:24.724455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.724482] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:13.299 [2024-07-14 21:29:24.725430] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:13.299 [2024-07-14 21:29:24.725466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.725484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:13.299 [2024-07-14 21:29:24.725496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:30:13.299 [2024-07-14 21:29:24.725507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.726657] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:13.299 [2024-07-14 21:29:24.739892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.739929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:13.299 [2024-07-14 21:29:24.739959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.237 ms 00:30:13.299 [2024-07-14 21:29:24.739969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.740030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.740047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:13.299 [2024-07-14 21:29:24.740060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:13.299 [2024-07-14 21:29:24.740069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.744447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
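Each management step in this startup is logged as a trace_step group: an Action or Rollback marker, then the step name, duration, and status. With one record per line, as in the raw console output, a throwaway awk filter (hypothetical, written only against the format visible in this trace) tabulates the per-step cost:

    # pair each "name: <step>" record with the "duration: <ms>" record
    # that follows it; build.log is a hypothetical capture of this output
    awk '/trace_step.*name:/     { sub(/.*name: /, ""); step = $0 }
         /trace_step.*duration:/ { print $(NF-1) " ms  " step }' build.log

Against this run it would show, for example, Load super block at 13.237 ms next to bookkeeping steps like Check configuration and Validate super block that sit well under 0.1 ms.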
00:30:13.299 [2024-07-14 21:29:24.744481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:13.299 [2024-07-14 21:29:24.744510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:30:13.299 [2024-07-14 21:29:24.744519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.744622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.744642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:13.299 [2024-07-14 21:29:24.744652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:30:13.299 [2024-07-14 21:29:24.744662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.744715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.744730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:13.299 [2024-07-14 21:29:24.744740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:13.299 [2024-07-14 21:29:24.744749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.744777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:13.299 [2024-07-14 21:29:24.748398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.748429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:13.299 [2024-07-14 21:29:24.748457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.628 ms 00:30:13.299 [2024-07-14 21:29:24.748466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.748504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.748516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:13.299 [2024-07-14 21:29:24.748526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:13.299 [2024-07-14 21:29:24.748536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.748584] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:13.299 [2024-07-14 21:29:24.748627] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:13.299 [2024-07-14 21:29:24.748664] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:13.299 [2024-07-14 21:29:24.748684] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:13.299 [2024-07-14 21:29:24.748778] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:13.299 [2024-07-14 21:29:24.748792] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:13.299 [2024-07-14 21:29:24.748805] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:13.299 [2024-07-14 21:29:24.748817] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:13.299 [2024-07-14 21:29:24.748843] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:13.299 [2024-07-14 21:29:24.748856] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:13.299 [2024-07-14 21:29:24.748866] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:13.299 [2024-07-14 21:29:24.748875] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:13.299 [2024-07-14 21:29:24.748884] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:13.299 [2024-07-14 21:29:24.748909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.748937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:13.299 [2024-07-14 21:29:24.748961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:30:13.299 [2024-07-14 21:29:24.748970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.749057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.299 [2024-07-14 21:29:24.749072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:13.299 [2024-07-14 21:29:24.749082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:13.299 [2024-07-14 21:29:24.749090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.299 [2024-07-14 21:29:24.749178] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:13.299 [2024-07-14 21:29:24.749193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:13.299 [2024-07-14 21:29:24.749207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:13.299 [2024-07-14 21:29:24.749217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.299 [2024-07-14 21:29:24.749226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:13.299 [2024-07-14 21:29:24.749235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:13.299 [2024-07-14 21:29:24.749243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:13.299 [2024-07-14 21:29:24.749253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:13.299 [2024-07-14 21:29:24.749262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:13.299 [2024-07-14 21:29:24.749270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:13.300 [2024-07-14 21:29:24.749278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:13.300 [2024-07-14 21:29:24.749287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:13.300 [2024-07-14 21:29:24.749297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:13.300 [2024-07-14 21:29:24.749306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:13.300 [2024-07-14 21:29:24.749314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:13.300 [2024-07-14 21:29:24.749322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:13.300 [2024-07-14 21:29:24.749339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:13.300 [2024-07-14 21:29:24.749347] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:13.300 [2024-07-14 21:29:24.749375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.300 [2024-07-14 21:29:24.749392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:13.300 [2024-07-14 21:29:24.749401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.300 [2024-07-14 21:29:24.749417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:13.300 [2024-07-14 21:29:24.749425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.300 [2024-07-14 21:29:24.749441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:13.300 [2024-07-14 21:29:24.749450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749458] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.300 [2024-07-14 21:29:24.749466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:13.300 [2024-07-14 21:29:24.749474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:13.300 [2024-07-14 21:29:24.749490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:13.300 [2024-07-14 21:29:24.749499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:13.300 [2024-07-14 21:29:24.749507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:13.300 [2024-07-14 21:29:24.749515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:13.300 [2024-07-14 21:29:24.749523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:13.300 [2024-07-14 21:29:24.749531] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:13.300 [2024-07-14 21:29:24.749548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:13.300 [2024-07-14 21:29:24.749557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749565] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:13.300 [2024-07-14 21:29:24.749575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:13.300 [2024-07-14 21:29:24.749584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:13.300 [2024-07-14 21:29:24.749593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.300 [2024-07-14 21:29:24.749602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:13.300 [2024-07-14 21:29:24.749611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:13.300 [2024-07-14 21:29:24.749619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:13.300 
[2024-07-14 21:29:24.749627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:13.300 [2024-07-14 21:29:24.749635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:13.300 [2024-07-14 21:29:24.749644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:13.300 [2024-07-14 21:29:24.749654] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:13.300 [2024-07-14 21:29:24.749665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.300 [2024-07-14 21:29:24.749676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:13.300 [2024-07-14 21:29:24.749685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:13.300 [2024-07-14 21:29:24.749695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:13.300 [2024-07-14 21:29:24.749704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:13.300 [2024-07-14 21:29:24.749713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:13.300 [2024-07-14 21:29:24.749722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:13.300 [2024-07-14 21:29:24.749731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:13.300 [2024-07-14 21:29:24.749740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:13.300 [2024-07-14 21:29:24.749750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:13.300 [2024-07-14 21:29:24.749759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:13.300 [2024-07-14 21:29:24.749768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:13.300 [2024-07-14 21:29:24.749777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:13.300 [2024-07-14 21:29:24.749787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:13.300 [2024-07-14 21:29:24.749796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:13.300 [2024-07-14 21:29:24.749805] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:13.300 [2024-07-14 21:29:24.749815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.300 [2024-07-14 21:29:24.749826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:13.300 [2024-07-14 21:29:24.749848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:13.300 [2024-07-14 21:29:24.749861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:13.300 [2024-07-14 21:29:24.749870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:13.300 [2024-07-14 21:29:24.749880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.300 [2024-07-14 21:29:24.749895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:13.300 [2024-07-14 21:29:24.749915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:30:13.300 [2024-07-14 21:29:24.749924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.300 [2024-07-14 21:29:24.784859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.300 [2024-07-14 21:29:24.785155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:13.300 [2024-07-14 21:29:24.785306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.881 ms 00:30:13.300 [2024-07-14 21:29:24.785419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.300 [2024-07-14 21:29:24.785564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.300 [2024-07-14 21:29:24.785681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:13.300 [2024-07-14 21:29:24.785823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:13.300 [2024-07-14 21:29:24.785937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.300 [2024-07-14 21:29:24.817250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.300 [2024-07-14 21:29:24.817484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:13.300 [2024-07-14 21:29:24.817640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.175 ms 00:30:13.300 [2024-07-14 21:29:24.817687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.300 [2024-07-14 21:29:24.817843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.300 [2024-07-14 21:29:24.817900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:13.300 [2024-07-14 21:29:24.817937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:13.300 [2024-07-14 21:29:24.817972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.300 [2024-07-14 21:29:24.818434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.300 [2024-07-14 21:29:24.818576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:13.300 [2024-07-14 21:29:24.818685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:30:13.300 [2024-07-14 21:29:24.818851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.300 [2024-07-14 21:29:24.819046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.300 [2024-07-14 21:29:24.819105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:13.300 [2024-07-14 21:29:24.819269] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:30:13.300 [2024-07-14 21:29:24.819316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.300 [2024-07-14 21:29:24.832472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.300 [2024-07-14 21:29:24.832658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:13.300 [2024-07-14 21:29:24.832785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.102 ms 00:30:13.300 [2024-07-14 21:29:24.832849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.847321] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:13.560 [2024-07-14 21:29:24.847542] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:13.560 [2024-07-14 21:29:24.847675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.847778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:13.560 [2024-07-14 21:29:24.847878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.652 ms 00:30:13.560 [2024-07-14 21:29:24.847967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.874521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.874723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:13.560 [2024-07-14 21:29:24.874890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.434 ms 00:30:13.560 [2024-07-14 21:29:24.874943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.890259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.890442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:13.560 [2024-07-14 21:29:24.890603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.106 ms 00:30:13.560 [2024-07-14 21:29:24.890650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.904857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.904975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:13.560 [2024-07-14 21:29:24.904991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.159 ms 00:30:13.560 [2024-07-14 21:29:24.905000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.905782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.905871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:13.560 [2024-07-14 21:29:24.905889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:30:13.560 [2024-07-14 21:29:24.905900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.968065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.968124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:13.560 [2024-07-14 21:29:24.968157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.125 ms 00:30:13.560 [2024-07-14 21:29:24.968167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.979207] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:13.560 [2024-07-14 21:29:24.981635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.981667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:13.560 [2024-07-14 21:29:24.981698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.407 ms 00:30:13.560 [2024-07-14 21:29:24.981708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.981806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.981858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:13.560 [2024-07-14 21:29:24.981870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:13.560 [2024-07-14 21:29:24.981880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.981978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.982000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:13.560 [2024-07-14 21:29:24.982012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:13.560 [2024-07-14 21:29:24.982021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.982051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.982064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:13.560 [2024-07-14 21:29:24.982074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:13.560 [2024-07-14 21:29:24.982084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:24.982118] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:13.560 [2024-07-14 21:29:24.982148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:24.982174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:13.560 [2024-07-14 21:29:24.982205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:13.560 [2024-07-14 21:29:24.982216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:25.011097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:25.011154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:13.560 [2024-07-14 21:29:25.011201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.858 ms 00:30:13.560 [2024-07-14 21:29:25.011227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-14 21:29:25.011299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-14 21:29:25.011324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:13.560 [2024-07-14 21:29:25.011334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:13.560 [2024-07-14 21:29:25.011344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
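The layout dump earlier in this startup is internally consistent: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region, which a one-line shell check confirms:

    # 20971520 entries x 4 B each, expressed in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80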
00:30:13.560 [2024-07-14 21:29:25.012530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.743 ms, result 0 00:30:57.592  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 69/1024 [MB] (23 MBps) Copying: 92/1024 [MB] (23 MBps) Copying: 115/1024 [MB] (23 MBps) Copying: 138/1024 [MB] (22 MBps) Copying: 161/1024 [MB] (23 MBps) Copying: 185/1024 [MB] (23 MBps) Copying: 208/1024 [MB] (23 MBps) Copying: 231/1024 [MB] (23 MBps) Copying: 255/1024 [MB] (23 MBps) Copying: 278/1024 [MB] (23 MBps) Copying: 302/1024 [MB] (23 MBps) Copying: 325/1024 [MB] (23 MBps) Copying: 348/1024 [MB] (22 MBps) Copying: 372/1024 [MB] (23 MBps) Copying: 395/1024 [MB] (23 MBps) Copying: 418/1024 [MB] (23 MBps) Copying: 442/1024 [MB] (23 MBps) Copying: 465/1024 [MB] (23 MBps) Copying: 489/1024 [MB] (23 MBps) Copying: 513/1024 [MB] (23 MBps) Copying: 536/1024 [MB] (23 MBps) Copying: 559/1024 [MB] (23 MBps) Copying: 583/1024 [MB] (24 MBps) Copying: 607/1024 [MB] (23 MBps) Copying: 630/1024 [MB] (23 MBps) Copying: 654/1024 [MB] (23 MBps) Copying: 677/1024 [MB] (23 MBps) Copying: 700/1024 [MB] (23 MBps) Copying: 723/1024 [MB] (23 MBps) Copying: 747/1024 [MB] (23 MBps) Copying: 770/1024 [MB] (23 MBps) Copying: 794/1024 [MB] (23 MBps) Copying: 818/1024 [MB] (24 MBps) Copying: 842/1024 [MB] (24 MBps) Copying: 866/1024 [MB] (23 MBps) Copying: 889/1024 [MB] (23 MBps) Copying: 912/1024 [MB] (22 MBps) Copying: 935/1024 [MB] (22 MBps) Copying: 957/1024 [MB] (22 MBps) Copying: 980/1024 [MB] (22 MBps) Copying: 1003/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-14 21:30:08.874171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.592 [2024-07-14 21:30:08.874218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:57.592 [2024-07-14 21:30:08.874252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:57.592 [2024-07-14 21:30:08.874261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.592 [2024-07-14 21:30:08.874286] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:57.592 [2024-07-14 21:30:08.877234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.592 [2024-07-14 21:30:08.877267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:57.592 [2024-07-14 21:30:08.877279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:30:57.592 [2024-07-14 21:30:08.877288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.592 [2024-07-14 21:30:08.879597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.592 [2024-07-14 21:30:08.879654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:57.592 [2024-07-14 21:30:08.879698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:30:57.592 [2024-07-14 21:30:08.879708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.592 [2024-07-14 21:30:08.879735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.592 [2024-07-14 21:30:08.879747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:57.592 [2024-07-14 21:30:08.879758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:57.592 [2024-07-14 21:30:08.879767] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.592 [2024-07-14 21:30:08.879813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.592 [2024-07-14 21:30:08.879836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:57.592 [2024-07-14 21:30:08.879851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:57.592 [2024-07-14 21:30:08.879860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.592 [2024-07-14 21:30:08.879876] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:57.592 [2024-07-14 21:30:08.879891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.879997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880081] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880329] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:57.592 [2024-07-14 21:30:08.880443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 
21:30:08.880563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.880805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:30:57.593 [2024-07-14 21:30:08.881587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:57.593 [2024-07-14 21:30:08.881945] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:57.593 [2024-07-14 21:30:08.882099] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0deb0b22-1569-42dd-9079-fdc56cfdd0ab 00:30:57.593 [2024-07-14 21:30:08.882160] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:57.593 [2024-07-14 21:30:08.882192] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:57.593 [2024-07-14 21:30:08.882223] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:57.593 [2024-07-14 21:30:08.882302] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:57.593 [2024-07-14 21:30:08.882342] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:57.593 [2024-07-14 21:30:08.882381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:57.593 [2024-07-14 21:30:08.882412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:57.593 [2024-07-14 21:30:08.882442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:57.593 [2024-07-14 21:30:08.882474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:57.593 [2024-07-14 21:30:08.882488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.593 [2024-07-14 21:30:08.882499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:57.593 [2024-07-14 21:30:08.882510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:30:57.593 [2024-07-14 21:30:08.882519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:08.895753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.593 [2024-07-14 21:30:08.895935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:57.593 [2024-07-14 21:30:08.896091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.209 ms 00:30:57.593 [2024-07-14 21:30:08.896148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:08.896663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.593 [2024-07-14 21:30:08.896788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:57.593 [2024-07-14 21:30:08.896907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:30:57.593 [2024-07-14 21:30:08.897053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:08.925283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:08.925440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:30:57.593 [2024-07-14 21:30:08.925547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:08.925590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:08.925664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:08.925762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:57.593 [2024-07-14 21:30:08.925845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:08.925886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:08.925972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:08.926056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:57.593 [2024-07-14 21:30:08.926099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:08.926139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:08.926182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:08.926233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:57.593 [2024-07-14 21:30:08.926311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:08.926350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:09.006332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:09.006513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:57.593 [2024-07-14 21:30:09.006654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:09.006707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:09.071550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:09.071726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:57.593 [2024-07-14 21:30:09.071860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:09.071915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:09.072072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:09.072123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.593 [2024-07-14 21:30:09.072236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:09.072279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:09.072359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:09.072467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.593 [2024-07-14 21:30:09.072501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:09.072609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:09.072740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:09.072790] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.593 [2024-07-14 21:30:09.072893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:09.072937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:09.073075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:09.073144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:57.593 [2024-07-14 21:30:09.073321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.593 [2024-07-14 21:30:09.073368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.593 [2024-07-14 21:30:09.073434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.593 [2024-07-14 21:30:09.073487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.593 [2024-07-14 21:30:09.073532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.594 [2024-07-14 21:30:09.073563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.594 [2024-07-14 21:30:09.073638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.594 [2024-07-14 21:30:09.073691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.594 [2024-07-14 21:30:09.073768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.594 [2024-07-14 21:30:09.073846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.594 [2024-07-14 21:30:09.074003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 199.815 ms, result 0 00:30:58.529 00:30:58.529 00:30:58.529 21:30:10 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:58.788 [2024-07-14 21:30:10.109628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
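Steps 69 through 74 of restore.sh, as traced above, form a write / kill / fast-restore / read-back cycle. Condensed to its commands (SPDK_DIR is shorthand introduced here for /home/vagrant/spdk_repo/spdk; the md5sums of the payload before the write and after the read-back are what the test ultimately compares):

    dd if=/dev/urandom of=$SPDK_DIR/test/ftl/testfile bs=4K count=256K
    md5sum $SPDK_DIR/test/ftl/testfile                        # checksum the payload
    $SPDK_DIR/build/bin/spdk_dd --if=$SPDK_DIR/test/ftl/testfile --ob=ftl0 \
        --json=$SPDK_DIR/test/ftl/config/ftl.json             # write 1 GiB into ftl0
    $SPDK_DIR/build/bin/spdk_dd --ib=ftl0 --of=$SPDK_DIR/test/ftl/testfile \
        --json=$SPDK_DIR/test/ftl/config/ftl.json --count=262144   # read it back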
00:30:58.788 [2024-07-14 21:30:10.109821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87124 ] 00:30:58.788 [2024-07-14 21:30:10.274600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.046 [2024-07-14 21:30:10.421466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.306 [2024-07-14 21:30:10.667454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:59.306 [2024-07-14 21:30:10.667521] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:59.306 [2024-07-14 21:30:10.822011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.822054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:59.306 [2024-07-14 21:30:10.822071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:59.306 [2024-07-14 21:30:10.822081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.822141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.822159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:59.306 [2024-07-14 21:30:10.822169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:59.306 [2024-07-14 21:30:10.822181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.822207] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:59.306 [2024-07-14 21:30:10.822966] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:59.306 [2024-07-14 21:30:10.822990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.823004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:59.306 [2024-07-14 21:30:10.823015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:30:59.306 [2024-07-14 21:30:10.823025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.823461] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:59.306 [2024-07-14 21:30:10.823502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.823515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:59.306 [2024-07-14 21:30:10.823527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:59.306 [2024-07-14 21:30:10.823543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.823596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.823611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:59.306 [2024-07-14 21:30:10.823636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:59.306 [2024-07-14 21:30:10.823645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.824053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
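Unlike the first bring-up, this restart finds the shared-memory state clean (SHM: clean 1, shm_clean 1 above, versus clean 0 before the dirty start), so the superblock loads in 0.044 ms instead of the earlier 13.237 ms cold load, which is the point of the fast restore path under test. The finish_msg summaries make the comparison easy to pull from a saved console log (hypothetical filename; the pattern matches the format in this trace):

    grep -o "name 'FTL [a-z ]*', duration = [0-9.]* ms" build.log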
00:30:59.306 [2024-07-14 21:30:10.824071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:59.306 [2024-07-14 21:30:10.824082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:30:59.306 [2024-07-14 21:30:10.824097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.824195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.824211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:59.306 [2024-07-14 21:30:10.824221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:59.306 [2024-07-14 21:30:10.824230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.824259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.824271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:59.306 [2024-07-14 21:30:10.824282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:59.306 [2024-07-14 21:30:10.824291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.824318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:59.306 [2024-07-14 21:30:10.828057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.828089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:59.306 [2024-07-14 21:30:10.828105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:30:59.306 [2024-07-14 21:30:10.828114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.828149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.828162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:59.306 [2024-07-14 21:30:10.828172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:59.306 [2024-07-14 21:30:10.828180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.828227] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:59.306 [2024-07-14 21:30:10.828254] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:59.306 [2024-07-14 21:30:10.828286] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:59.306 [2024-07-14 21:30:10.828304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:59.306 [2024-07-14 21:30:10.828383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:59.306 [2024-07-14 21:30:10.828395] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:59.306 [2024-07-14 21:30:10.828406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:59.306 [2024-07-14 21:30:10.828418] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:59.306 [2024-07-14 21:30:10.828428] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:59.306 [2024-07-14 21:30:10.828438] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:59.306 [2024-07-14 21:30:10.828446] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:59.306 [2024-07-14 21:30:10.828454] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:59.306 [2024-07-14 21:30:10.828466] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:59.306 [2024-07-14 21:30:10.828475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.828484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:59.306 [2024-07-14 21:30:10.828494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:30:59.306 [2024-07-14 21:30:10.828502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.828578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.306 [2024-07-14 21:30:10.828608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:59.306 [2024-07-14 21:30:10.828618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:59.306 [2024-07-14 21:30:10.828628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.306 [2024-07-14 21:30:10.828722] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:59.306 [2024-07-14 21:30:10.828739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:59.306 [2024-07-14 21:30:10.828749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:59.306 [2024-07-14 21:30:10.828759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.306 [2024-07-14 21:30:10.828769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:59.306 [2024-07-14 21:30:10.828777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:59.307 [2024-07-14 21:30:10.828787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:59.307 [2024-07-14 21:30:10.828797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:59.307 [2024-07-14 21:30:10.828806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:59.307 [2024-07-14 21:30:10.828830] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:59.307 [2024-07-14 21:30:10.828840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:59.307 [2024-07-14 21:30:10.828849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:59.307 [2024-07-14 21:30:10.828859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:59.307 [2024-07-14 21:30:10.828868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:59.307 [2024-07-14 21:30:10.828877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:59.307 [2024-07-14 21:30:10.828886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.307 [2024-07-14 21:30:10.828895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:59.307 [2024-07-14 21:30:10.828918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:59.307 [2024-07-14 21:30:10.828941] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.307 [2024-07-14 21:30:10.828949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:59.307 [2024-07-14 21:30:10.828958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:59.307 [2024-07-14 21:30:10.828966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:59.307 [2024-07-14 21:30:10.829002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:59.307 [2024-07-14 21:30:10.829011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:59.307 [2024-07-14 21:30:10.829018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:59.307 [2024-07-14 21:30:10.829026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:59.307 [2024-07-14 21:30:10.829034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:59.307 [2024-07-14 21:30:10.829042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:59.307 [2024-07-14 21:30:10.829050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:59.307 [2024-07-14 21:30:10.829058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:59.307 [2024-07-14 21:30:10.829066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:59.307 [2024-07-14 21:30:10.829074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:59.307 [2024-07-14 21:30:10.829081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:59.307 [2024-07-14 21:30:10.829089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:59.307 [2024-07-14 21:30:10.829098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:59.307 [2024-07-14 21:30:10.829105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:59.307 [2024-07-14 21:30:10.829113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:59.307 [2024-07-14 21:30:10.829121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:59.307 [2024-07-14 21:30:10.829129] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:59.307 [2024-07-14 21:30:10.829137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.307 [2024-07-14 21:30:10.829145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:59.307 [2024-07-14 21:30:10.829152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:59.307 [2024-07-14 21:30:10.829161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.307 [2024-07-14 21:30:10.829169] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:59.307 [2024-07-14 21:30:10.829179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:59.307 [2024-07-14 21:30:10.829188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:59.307 [2024-07-14 21:30:10.829198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:59.307 [2024-07-14 21:30:10.829208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:59.307 [2024-07-14 21:30:10.829218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:59.307 [2024-07-14 21:30:10.829227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:59.307 
[2024-07-14 21:30:10.829236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:59.307 [2024-07-14 21:30:10.829245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:59.307 [2024-07-14 21:30:10.829254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:59.307 [2024-07-14 21:30:10.829264] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:59.307 [2024-07-14 21:30:10.829276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:59.307 [2024-07-14 21:30:10.829295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:59.307 [2024-07-14 21:30:10.829304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:59.307 [2024-07-14 21:30:10.829313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:59.307 [2024-07-14 21:30:10.829322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:59.307 [2024-07-14 21:30:10.829331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:59.307 [2024-07-14 21:30:10.829340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:59.307 [2024-07-14 21:30:10.829348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:59.307 [2024-07-14 21:30:10.829357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:59.307 [2024-07-14 21:30:10.829366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:59.307 [2024-07-14 21:30:10.829375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:59.307 [2024-07-14 21:30:10.829384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:59.307 [2024-07-14 21:30:10.829392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:59.307 [2024-07-14 21:30:10.829401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:59.307 [2024-07-14 21:30:10.829410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:59.307 [2024-07-14 21:30:10.829419] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:59.307 [2024-07-14 21:30:10.829433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:59.307 [2024-07-14 21:30:10.829443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:59.307 [2024-07-14 21:30:10.829453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:59.307 [2024-07-14 21:30:10.829462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:59.307 [2024-07-14 21:30:10.829471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:59.307 [2024-07-14 21:30:10.829481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.307 [2024-07-14 21:30:10.829492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:59.307 [2024-07-14 21:30:10.829501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:30:59.307 [2024-07-14 21:30:10.829510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.861501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.861701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.567 [2024-07-14 21:30:10.861888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.945 ms 00:30:59.567 [2024-07-14 21:30:10.861938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.862129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.862265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:59.567 [2024-07-14 21:30:10.862368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:59.567 [2024-07-14 21:30:10.862414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.892890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.893076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.567 [2024-07-14 21:30:10.893176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.278 ms 00:30:59.567 [2024-07-14 21:30:10.893221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.893292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.893388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.567 [2024-07-14 21:30:10.893440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:59.567 [2024-07-14 21:30:10.893473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.893628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.893677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.567 [2024-07-14 21:30:10.893712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:59.567 [2024-07-14 21:30:10.893744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.893999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.894126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.567 [2024-07-14 21:30:10.894246] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:30:59.567 [2024-07-14 21:30:10.894372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.907623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.907781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.567 [2024-07-14 21:30:10.907940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.185 ms 00:30:59.567 [2024-07-14 21:30:10.907961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.908108] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:59.567 [2024-07-14 21:30:10.908132] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:59.567 [2024-07-14 21:30:10.908145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.908155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:59.567 [2024-07-14 21:30:10.908168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:59.567 [2024-07-14 21:30:10.908192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.919160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.919190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:59.567 [2024-07-14 21:30:10.919202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.934 ms 00:30:59.567 [2024-07-14 21:30:10.919211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.567 [2024-07-14 21:30:10.919306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.567 [2024-07-14 21:30:10.919319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:59.567 [2024-07-14 21:30:10.919329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:30:59.568 [2024-07-14 21:30:10.919338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.919384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.919398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:59.568 [2024-07-14 21:30:10.919414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:59.568 [2024-07-14 21:30:10.919422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.920034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.920053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:59.568 [2024-07-14 21:30:10.920064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:30:59.568 [2024-07-14 21:30:10.920073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.920092] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:59.568 [2024-07-14 21:30:10.920106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.920127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:59.568 [2024-07-14 21:30:10.920141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:59.568 [2024-07-14 21:30:10.920165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.930513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:59.568 [2024-07-14 21:30:10.930686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.930702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:59.568 [2024-07-14 21:30:10.930713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.451 ms 00:30:59.568 [2024-07-14 21:30:10.930722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.932601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.932630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:59.568 [2024-07-14 21:30:10.932642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms 00:30:59.568 [2024-07-14 21:30:10.932655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.932734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.932751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:59.568 [2024-07-14 21:30:10.932761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:59.568 [2024-07-14 21:30:10.932769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.932822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.932837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:59.568 [2024-07-14 21:30:10.932847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:59.568 [2024-07-14 21:30:10.932856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.932888] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:59.568 [2024-07-14 21:30:10.932902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.932912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:59.568 [2024-07-14 21:30:10.932929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:59.568 [2024-07-14 21:30:10.932938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.957660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.957698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:59.568 [2024-07-14 21:30:10.957713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.700 ms 00:30:59.568 [2024-07-14 21:30:10.957728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.957792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.568 [2024-07-14 21:30:10.957841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:59.568 [2024-07-14 21:30:10.957854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:30:59.568 [2024-07-14 21:30:10.957863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.568 [2024-07-14 21:30:10.959069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.430 ms, result 0 00:31:44.182  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-14 21:30:55.688983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.182 [2024-07-14 21:30:55.689098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:44.182 [2024-07-14 21:30:55.689121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:44.182 [2024-07-14 21:30:55.689131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.182 [2024-07-14 21:30:55.689162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:44.182 [2024-07-14 21:30:55.692495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.182 [2024-07-14 21:30:55.692531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:44.182 [2024-07-14 21:30:55.692551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:31:44.182 [2024-07-14 21:30:55.692560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.182 [2024-07-14 21:30:55.692841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.182 [2024-07-14 21:30:55.692861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:44.182 [2024-07-14 21:30:55.692873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:31:44.182 [2024-07-14 21:30:55.692883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.182 [2024-07-14 21:30:55.692914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.182 [2024-07-14 21:30:55.692962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:31:44.182 [2024-07-14 21:30:55.692989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:44.182 [2024-07-14 21:30:55.693019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.182 [2024-07-14 21:30:55.693071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.182 [2024-07-14 21:30:55.693085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:44.182 [2024-07-14 21:30:55.693095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:44.183 [2024-07-14 21:30:55.693104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.183 [2024-07-14 21:30:55.693121] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:44.183 [2024-07-14 21:30:55.693136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693401] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 
[2024-07-14 21:30:55.693661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free 00:31:44.183 [2024-07-14 21:30:55.693969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.693990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 
0 / 261120 wr_cnt: 0 state: free 00:31:44.183 [2024-07-14 21:30:55.694253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:44.184 [2024-07-14 21:30:55.694264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:44.184 [2024-07-14 21:30:55.694274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:44.184 [2024-07-14 21:30:55.694284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:44.184 [2024-07-14 21:30:55.694294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:44.184 [2024-07-14 21:30:55.694305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:44.184 [2024-07-14 21:30:55.694323] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:44.184 [2024-07-14 21:30:55.694333] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0deb0b22-1569-42dd-9079-fdc56cfdd0ab 00:31:44.184 [2024-07-14 21:30:55.694344] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:44.184 [2024-07-14 21:30:55.694353] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:44.184 [2024-07-14 21:30:55.694367] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:44.184 [2024-07-14 21:30:55.694378] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:44.184 [2024-07-14 21:30:55.694387] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:44.184 [2024-07-14 21:30:55.694397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:44.184 [2024-07-14 21:30:55.694407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:44.184 [2024-07-14 21:30:55.694416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:44.184 [2024-07-14 21:30:55.694425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:44.184 [2024-07-14 21:30:55.694434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.184 [2024-07-14 21:30:55.694445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:44.184 [2024-07-14 21:30:55.694455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:31:44.184 [2024-07-14 21:30:55.694465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.184 [2024-07-14 21:30:55.708046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.184 [2024-07-14 21:30:55.708080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:44.184 [2024-07-14 21:30:55.708094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.561 ms 00:31:44.184 [2024-07-14 21:30:55.708103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.184 [2024-07-14 21:30:55.708430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.184 [2024-07-14 21:30:55.708449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:44.184 [2024-07-14 21:30:55.708460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:31:44.184 [2024-07-14 21:30:55.708469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.740761] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.740831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:44.444 [2024-07-14 21:30:55.740847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.740858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.740927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.740955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:44.444 [2024-07-14 21:30:55.740965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.740974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.741043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.741065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:44.444 [2024-07-14 21:30:55.741076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.741085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.741114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.741126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:44.444 [2024-07-14 21:30:55.741136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.741145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.815716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.815775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:44.444 [2024-07-14 21:30:55.815791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.815827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.879960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.880005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:44.444 [2024-07-14 21:30:55.880019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.880028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.880087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.880101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:44.444 [2024-07-14 21:30:55.880118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.880126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.880163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.880175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:44.444 [2024-07-14 21:30:55.880185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.880194] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.880271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.880287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:44.444 [2024-07-14 21:30:55.880301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.880310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.880345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.880361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:44.444 [2024-07-14 21:30:55.880370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.880384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.880421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.880433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:44.444 [2024-07-14 21:30:55.880442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.880455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.880496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.444 [2024-07-14 21:30:55.880509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:44.444 [2024-07-14 21:30:55.880518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.444 [2024-07-14 21:30:55.880527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.444 [2024-07-14 21:30:55.880667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 191.684 ms, result 0 00:31:45.383 00:31:45.383 00:31:45.383 21:30:56 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:47.289 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:47.289 21:30:58 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:47.289 [2024-07-14 21:30:58.573000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:31:47.289 [2024-07-14 21:30:58.573166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87581 ] 00:31:47.289 [2024-07-14 21:30:58.747174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.548 [2024-07-14 21:30:58.945858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.806 [2024-07-14 21:30:59.206942] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:47.806 [2024-07-14 21:30:59.207025] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:48.068 [2024-07-14 21:30:59.361738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.361782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:48.068 [2024-07-14 21:30:59.361843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:48.068 [2024-07-14 21:30:59.361856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.361918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.361952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:48.068 [2024-07-14 21:30:59.361964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:48.068 [2024-07-14 21:30:59.361977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.362004] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:48.068 [2024-07-14 21:30:59.362886] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:48.068 [2024-07-14 21:30:59.362924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.362940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:48.068 [2024-07-14 21:30:59.362951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:31:48.068 [2024-07-14 21:30:59.362961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.363394] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:48.068 [2024-07-14 21:30:59.363435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.363448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:48.068 [2024-07-14 21:30:59.363459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:48.068 [2024-07-14 21:30:59.363475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.363539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.363553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:48.068 [2024-07-14 21:30:59.363563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:48.068 [2024-07-14 21:30:59.363572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.363979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:48.068 [2024-07-14 21:30:59.363997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:48.068 [2024-07-14 21:30:59.364009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:31:48.068 [2024-07-14 21:30:59.364021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.364090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.364107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:48.068 [2024-07-14 21:30:59.364117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:48.068 [2024-07-14 21:30:59.364125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.364156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.364170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:48.068 [2024-07-14 21:30:59.364180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:48.068 [2024-07-14 21:30:59.364189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.364231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:48.068 [2024-07-14 21:30:59.368037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.368190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:48.068 [2024-07-14 21:30:59.368341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.811 ms 00:31:48.068 [2024-07-14 21:30:59.368387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.368455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.368559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:48.068 [2024-07-14 21:30:59.368658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:48.068 [2024-07-14 21:30:59.368697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.368784] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:48.068 [2024-07-14 21:30:59.369013] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:48.068 [2024-07-14 21:30:59.369105] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:48.068 [2024-07-14 21:30:59.369276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:48.068 [2024-07-14 21:30:59.369412] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:48.068 [2024-07-14 21:30:59.369529] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:48.068 [2024-07-14 21:30:59.369585] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:48.068 [2024-07-14 21:30:59.369635] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:48.068 [2024-07-14 21:30:59.369810] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:48.068 [2024-07-14 21:30:59.369960] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:48.068 [2024-07-14 21:30:59.370002] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:48.068 [2024-07-14 21:30:59.370085] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:48.068 [2024-07-14 21:30:59.370110] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:48.068 [2024-07-14 21:30:59.370122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.370132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:48.068 [2024-07-14 21:30:59.370159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:31:48.068 [2024-07-14 21:30:59.370183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.370273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.068 [2024-07-14 21:30:59.370288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:48.068 [2024-07-14 21:30:59.370299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:48.068 [2024-07-14 21:30:59.370308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.068 [2024-07-14 21:30:59.370418] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:48.068 [2024-07-14 21:30:59.370434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:48.068 [2024-07-14 21:30:59.370445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:48.068 [2024-07-14 21:30:59.370455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:48.068 [2024-07-14 21:30:59.370473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:48.068 [2024-07-14 21:30:59.370491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:48.068 [2024-07-14 21:30:59.370499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:48.068 [2024-07-14 21:30:59.370516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:48.068 [2024-07-14 21:30:59.370525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:48.068 [2024-07-14 21:30:59.370533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:48.068 [2024-07-14 21:30:59.370541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:48.068 [2024-07-14 21:30:59.370565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:48.068 [2024-07-14 21:30:59.370573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:48.068 [2024-07-14 21:30:59.370589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:48.068 [2024-07-14 21:30:59.370597] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:48.068 [2024-07-14 21:30:59.370614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:48.068 [2024-07-14 21:30:59.370642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:48.068 [2024-07-14 21:30:59.370651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:48.068 [2024-07-14 21:30:59.370667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:48.068 [2024-07-14 21:30:59.370676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:48.068 [2024-07-14 21:30:59.370693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:48.068 [2024-07-14 21:30:59.370702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:48.068 [2024-07-14 21:30:59.370720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:48.068 [2024-07-14 21:30:59.370729] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:48.068 [2024-07-14 21:30:59.370745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:48.068 [2024-07-14 21:30:59.370754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:48.068 [2024-07-14 21:30:59.370762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:48.068 [2024-07-14 21:30:59.370770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:48.068 [2024-07-14 21:30:59.370779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:48.068 [2024-07-14 21:30:59.370787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:48.068 [2024-07-14 21:30:59.370819] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:48.068 [2024-07-14 21:30:59.370827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.068 [2024-07-14 21:30:59.370835] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:48.068 [2024-07-14 21:30:59.370845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:48.068 [2024-07-14 21:30:59.370853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:48.069 [2024-07-14 21:30:59.370863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.069 [2024-07-14 21:30:59.370886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:48.069 [2024-07-14 21:30:59.370896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:48.069 [2024-07-14 21:30:59.370905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:48.069 
[2024-07-14 21:30:59.370914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:48.069 [2024-07-14 21:30:59.370922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:48.069 [2024-07-14 21:30:59.370931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:48.069 [2024-07-14 21:30:59.370941] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:48.069 [2024-07-14 21:30:59.370953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:48.069 [2024-07-14 21:30:59.370964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:48.069 [2024-07-14 21:30:59.370973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:48.069 [2024-07-14 21:30:59.370982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:48.069 [2024-07-14 21:30:59.370992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:48.069 [2024-07-14 21:30:59.371001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:48.069 [2024-07-14 21:30:59.371010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:48.069 [2024-07-14 21:30:59.371019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:48.069 [2024-07-14 21:30:59.371029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:48.069 [2024-07-14 21:30:59.371039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:48.069 [2024-07-14 21:30:59.371048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:48.069 [2024-07-14 21:30:59.371057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:48.069 [2024-07-14 21:30:59.371066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:48.069 [2024-07-14 21:30:59.371075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:48.069 [2024-07-14 21:30:59.371085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:48.069 [2024-07-14 21:30:59.371094] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:48.069 [2024-07-14 21:30:59.371125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:48.069 [2024-07-14 21:30:59.371136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:48.069 [2024-07-14 21:30:59.371146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:48.069 [2024-07-14 21:30:59.371156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:48.069 [2024-07-14 21:30:59.371166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:48.069 [2024-07-14 21:30:59.371177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.371187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:48.069 [2024-07-14 21:30:59.371196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:31:48.069 [2024-07-14 21:30:59.371205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.404545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.404595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:48.069 [2024-07-14 21:30:59.404629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.280 ms 00:31:48.069 [2024-07-14 21:30:59.404639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.404731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.404745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:48.069 [2024-07-14 21:30:59.404755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:48.069 [2024-07-14 21:30:59.404764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.434515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.434557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:48.069 [2024-07-14 21:30:59.434572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.632 ms 00:31:48.069 [2024-07-14 21:30:59.434580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.434622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.434635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:48.069 [2024-07-14 21:30:59.434650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:48.069 [2024-07-14 21:30:59.434658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.434770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.434785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:48.069 [2024-07-14 21:30:59.434811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:48.069 [2024-07-14 21:30:59.434839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.434977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.434993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:48.069 [2024-07-14 21:30:59.435004] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:31:48.069 [2024-07-14 21:30:59.435016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.447853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.447887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:48.069 [2024-07-14 21:30:59.447921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.812 ms 00:31:48.069 [2024-07-14 21:30:59.447930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.448056] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:48.069 [2024-07-14 21:30:59.448092] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:48.069 [2024-07-14 21:30:59.448120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.448146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:48.069 [2024-07-14 21:30:59.448156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:31:48.069 [2024-07-14 21:30:59.448166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.458840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.458868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:48.069 [2024-07-14 21:30:59.458880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.651 ms 00:31:48.069 [2024-07-14 21:30:59.458889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.458984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.458998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:48.069 [2024-07-14 21:30:59.459008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:48.069 [2024-07-14 21:30:59.459017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.459062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.459077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:48.069 [2024-07-14 21:30:59.459099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:48.069 [2024-07-14 21:30:59.459108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.459658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.459672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:48.069 [2024-07-14 21:30:59.459682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:31:48.069 [2024-07-14 21:30:59.459691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.459707] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:48.069 [2024-07-14 21:30:59.459719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.459738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:48.069 [2024-07-14 21:30:59.459752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:48.069 [2024-07-14 21:30:59.459760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.470018] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:48.069 [2024-07-14 21:30:59.470194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.470211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:48.069 [2024-07-14 21:30:59.470221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.413 ms 00:31:48.069 [2024-07-14 21:30:59.470230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.472056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.472082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:48.069 [2024-07-14 21:30:59.472109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:31:48.069 [2024-07-14 21:30:59.472122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.472204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.472220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:48.069 [2024-07-14 21:30:59.472246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:48.069 [2024-07-14 21:30:59.472254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.472280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.472291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:48.069 [2024-07-14 21:30:59.472300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:48.069 [2024-07-14 21:30:59.472308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.069 [2024-07-14 21:30:59.472343] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:48.069 [2024-07-14 21:30:59.472355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.069 [2024-07-14 21:30:59.472364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:48.069 [2024-07-14 21:30:59.472372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:48.069 [2024-07-14 21:30:59.472380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.070 [2024-07-14 21:30:59.497054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.070 [2024-07-14 21:30:59.497090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:48.070 [2024-07-14 21:30:59.497104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.656 ms 00:31:48.070 [2024-07-14 21:30:59.497119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.070 [2024-07-14 21:30:59.497182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.070 [2024-07-14 21:30:59.497198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:48.070 [2024-07-14 21:30:59.497207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:32:33.372 [2024-07-14 21:30:59.497215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.372 [2024-07-14 21:30:59.498582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.290 ms, result 0 00:32:33.372  Copying: 1048340/1048576 [kB] (3916 kBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-14 21:31:44.800639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.372 [2024-07-14 21:31:44.800751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:33.372 [2024-07-14 21:31:44.800812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:32:33.372 [2024-07-14 21:31:44.800843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.372 [2024-07-14 21:31:44.803457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:33.372 [2024-07-14 21:31:44.809251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.372 [2024-07-14 21:31:44.809283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:33.372 [2024-07-14 21:31:44.809296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.724 ms 00:32:33.372 [2024-07-14 21:31:44.809305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.372 [2024-07-14 21:31:44.818574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.372 [2024-07-14 21:31:44.818609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:33.372 [2024-07-14 21:31:44.818622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.941 ms 00:32:33.372 [2024-07-14 21:31:44.818639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.372 [2024-07-14 21:31:44.818669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.372 [2024-07-14 21:31:44.818680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0]
name: Fast persist NV cache metadata 00:32:33.372 [2024-07-14 21:31:44.818690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:33.372 [2024-07-14 21:31:44.818698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.372 [2024-07-14 21:31:44.818744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.372 [2024-07-14 21:31:44.818756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:33.372 [2024-07-14 21:31:44.818765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:33.372 [2024-07-14 21:31:44.818774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.372 [2024-07-14 21:31:44.818792] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:33.372 [2024-07-14 21:31:44.818834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:32:33.372 [2024-07-14 21:31:44.818862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.818997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.819006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.819015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.819025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
00:32:33.372 [2024-07-14 21:31:44.819035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.819047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.819057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.819067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:33.372 [2024-07-14 21:31:44.819076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819764] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:33.373 [2024-07-14 21:31:44.819831] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:33.373 [2024-07-14 21:31:44.819840] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0deb0b22-1569-42dd-9079-fdc56cfdd0ab 00:32:33.373 [2024-07-14 21:31:44.819849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:32:33.374 [2024-07-14 21:31:44.819869] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130336 00:32:33.374 [2024-07-14 21:31:44.819878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:32:33.374 [2024-07-14 21:31:44.819903] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:33.374 [2024-07-14 21:31:44.819911] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:33.374 [2024-07-14 21:31:44.819920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:33.374 [2024-07-14 21:31:44.819930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:33.374 [2024-07-14 21:31:44.819939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:33.374 [2024-07-14 21:31:44.819947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:33.374 [2024-07-14 21:31:44.819955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.374 [2024-07-14 21:31:44.819964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:33.374 [2024-07-14 21:31:44.819978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:32:33.374 [2024-07-14 21:31:44.819986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.374 [2024-07-14 21:31:44.833124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.374 [2024-07-14 21:31:44.833153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:33.374 [2024-07-14 21:31:44.833166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.118 ms 00:32:33.374 [2024-07-14 21:31:44.833175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.374 [2024-07-14 21:31:44.833540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.374 [2024-07-14 21:31:44.833559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:33.374 [2024-07-14 21:31:44.833570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:32:33.374 [2024-07-14 21:31:44.833579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:32:33.374 [2024-07-14 21:31:44.861847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.374 [2024-07-14 21:31:44.861880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:33.374 [2024-07-14 21:31:44.861893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.374 [2024-07-14 21:31:44.861901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.374 [2024-07-14 21:31:44.861953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.374 [2024-07-14 21:31:44.861965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:33.374 [2024-07-14 21:31:44.861974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.374 [2024-07-14 21:31:44.861983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.374 [2024-07-14 21:31:44.862038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.374 [2024-07-14 21:31:44.862055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:33.374 [2024-07-14 21:31:44.862064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.374 [2024-07-14 21:31:44.862073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.374 [2024-07-14 21:31:44.862095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.374 [2024-07-14 21:31:44.862107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:33.374 [2024-07-14 21:31:44.862116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.374 [2024-07-14 21:31:44.862124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.632 [2024-07-14 21:31:44.937285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.632 [2024-07-14 21:31:44.937337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:33.632 [2024-07-14 21:31:44.937352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.632 [2024-07-14 21:31:44.937361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.632 [2024-07-14 21:31:45.001833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.632 [2024-07-14 21:31:45.001875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:33.632 [2024-07-14 21:31:45.001890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.632 [2024-07-14 21:31:45.001898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.632 [2024-07-14 21:31:45.001968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.632 [2024-07-14 21:31:45.001982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:33.632 [2024-07-14 21:31:45.001992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.632 [2024-07-14 21:31:45.002000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.632 [2024-07-14 21:31:45.002036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.632 [2024-07-14 21:31:45.002047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:33.632 [2024-07-14 21:31:45.002063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.632 [2024-07-14 
21:31:45.002072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.632 [2024-07-14 21:31:45.002186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.632 [2024-07-14 21:31:45.002209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:33.632 [2024-07-14 21:31:45.002219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.632 [2024-07-14 21:31:45.002228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.632 [2024-07-14 21:31:45.002261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.633 [2024-07-14 21:31:45.002276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:33.633 [2024-07-14 21:31:45.002291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.633 [2024-07-14 21:31:45.002300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.633 [2024-07-14 21:31:45.002337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.633 [2024-07-14 21:31:45.002349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:33.633 [2024-07-14 21:31:45.002359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.633 [2024-07-14 21:31:45.002368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.633 [2024-07-14 21:31:45.002411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:33.633 [2024-07-14 21:31:45.002425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:33.633 [2024-07-14 21:31:45.002440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:33.633 [2024-07-14 21:31:45.002448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.633 [2024-07-14 21:31:45.002586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 204.177 ms, result 0 00:32:35.006 00:32:35.006 00:32:35.006 21:31:46 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:35.006 [2024-07-14 21:31:46.430715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:32:35.006 [2024-07-14 21:31:46.430922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88049 ] 00:32:35.263 [2024-07-14 21:31:46.606953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.263 [2024-07-14 21:31:46.755938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.519 [2024-07-14 21:31:47.003439] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:35.519 [2024-07-14 21:31:47.003519] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:35.779 [2024-07-14 21:31:47.159278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.159341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:35.779 [2024-07-14 21:31:47.159374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:35.779 [2024-07-14 21:31:47.159384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.159446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.159465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:35.779 [2024-07-14 21:31:47.159476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:35.779 [2024-07-14 21:31:47.159489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.159517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:35.779 [2024-07-14 21:31:47.160363] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:35.779 [2024-07-14 21:31:47.160400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.160417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:35.779 [2024-07-14 21:31:47.160428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:32:35.779 [2024-07-14 21:31:47.160438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.160988] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:35.779 [2024-07-14 21:31:47.161034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.161046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:35.779 [2024-07-14 21:31:47.161058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:35.779 [2024-07-14 21:31:47.161074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.161127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.161142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:35.779 [2024-07-14 21:31:47.161153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:35.779 [2024-07-14 21:31:47.161162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.161530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:35.779 [2024-07-14 21:31:47.161556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:35.779 [2024-07-14 21:31:47.161569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:32:35.779 [2024-07-14 21:31:47.161584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.161661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.161684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:35.779 [2024-07-14 21:31:47.161695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:35.779 [2024-07-14 21:31:47.161706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.161738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.161752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:35.779 [2024-07-14 21:31:47.161763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:35.779 [2024-07-14 21:31:47.161772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.161837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:35.779 [2024-07-14 21:31:47.165895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.165927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:35.779 [2024-07-14 21:31:47.165962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.079 ms 00:32:35.779 [2024-07-14 21:31:47.165972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.166011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.166026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:35.779 [2024-07-14 21:31:47.166036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:35.779 [2024-07-14 21:31:47.166046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.166104] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:35.779 [2024-07-14 21:31:47.166132] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:35.779 [2024-07-14 21:31:47.166170] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:35.779 [2024-07-14 21:31:47.166192] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:32:35.779 [2024-07-14 21:31:47.166283] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:35.779 [2024-07-14 21:31:47.166298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:35.779 [2024-07-14 21:31:47.166310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:32:35.779 [2024-07-14 21:31:47.166323] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:35.779 [2024-07-14 21:31:47.166336] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:35.779 [2024-07-14 21:31:47.166346] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:35.779 [2024-07-14 21:31:47.166356] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:35.779 [2024-07-14 21:31:47.166365] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:35.779 [2024-07-14 21:31:47.166379] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:35.779 [2024-07-14 21:31:47.166389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.166399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:35.779 [2024-07-14 21:31:47.166409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:32:35.779 [2024-07-14 21:31:47.166418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.166520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.779 [2024-07-14 21:31:47.166535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:35.779 [2024-07-14 21:31:47.166545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:35.779 [2024-07-14 21:31:47.166555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.779 [2024-07-14 21:31:47.166655] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:35.779 [2024-07-14 21:31:47.166671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:35.779 [2024-07-14 21:31:47.166683] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:35.779 [2024-07-14 21:31:47.166693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.779 [2024-07-14 21:31:47.166704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:35.779 [2024-07-14 21:31:47.166713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:35.779 [2024-07-14 21:31:47.166723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:35.779 [2024-07-14 21:31:47.166732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:35.779 [2024-07-14 21:31:47.166742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:35.779 [2024-07-14 21:31:47.166752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:35.779 [2024-07-14 21:31:47.166761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:35.779 [2024-07-14 21:31:47.166770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:35.779 [2024-07-14 21:31:47.166780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:35.779 [2024-07-14 21:31:47.166789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:35.779 [2024-07-14 21:31:47.166799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:35.779 [2024-07-14 21:31:47.166808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.779 [2024-07-14 21:31:47.166818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:35.779 [2024-07-14 21:31:47.166843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:35.779 [2024-07-14 21:31:47.166870] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.779 [2024-07-14 21:31:47.166880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:35.779 [2024-07-14 21:31:47.166889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:35.779 [2024-07-14 21:31:47.166898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.780 [2024-07-14 21:31:47.166920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:35.780 [2024-07-14 21:31:47.166929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:35.780 [2024-07-14 21:31:47.166939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.780 [2024-07-14 21:31:47.166948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:35.780 [2024-07-14 21:31:47.166957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:35.780 [2024-07-14 21:31:47.166966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.780 [2024-07-14 21:31:47.166975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:35.780 [2024-07-14 21:31:47.166985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:35.780 [2024-07-14 21:31:47.166994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.780 [2024-07-14 21:31:47.167003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:35.780 [2024-07-14 21:31:47.167013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:35.780 [2024-07-14 21:31:47.167022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:35.780 [2024-07-14 21:31:47.167031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:35.780 [2024-07-14 21:31:47.167040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:35.780 [2024-07-14 21:31:47.167049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:35.780 [2024-07-14 21:31:47.167058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:35.780 [2024-07-14 21:31:47.167067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:35.780 [2024-07-14 21:31:47.167076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.780 [2024-07-14 21:31:47.167085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:35.780 [2024-07-14 21:31:47.167094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:35.780 [2024-07-14 21:31:47.167103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.780 [2024-07-14 21:31:47.167112] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:35.780 [2024-07-14 21:31:47.167122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:35.780 [2024-07-14 21:31:47.167132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:35.780 [2024-07-14 21:31:47.167141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.780 [2024-07-14 21:31:47.167151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:35.780 [2024-07-14 21:31:47.167161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:35.780 [2024-07-14 21:31:47.167171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:35.780 
[2024-07-14 21:31:47.167180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:35.780 [2024-07-14 21:31:47.167189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:35.780 [2024-07-14 21:31:47.167198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:35.780 [2024-07-14 21:31:47.167209] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:35.780 [2024-07-14 21:31:47.167221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:35.780 [2024-07-14 21:31:47.167233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:35.780 [2024-07-14 21:31:47.167242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:35.780 [2024-07-14 21:31:47.167252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:35.780 [2024-07-14 21:31:47.167262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:35.780 [2024-07-14 21:31:47.167272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:35.780 [2024-07-14 21:31:47.167281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:35.780 [2024-07-14 21:31:47.167291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:35.780 [2024-07-14 21:31:47.167301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:35.780 [2024-07-14 21:31:47.167311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:35.780 [2024-07-14 21:31:47.167321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:35.780 [2024-07-14 21:31:47.167331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:35.780 [2024-07-14 21:31:47.167340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:35.780 [2024-07-14 21:31:47.167350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:35.780 [2024-07-14 21:31:47.167360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:35.780 [2024-07-14 21:31:47.167370] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:35.780 [2024-07-14 21:31:47.167385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:35.780 [2024-07-14 21:31:47.167396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:35.780 [2024-07-14 21:31:47.167406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:35.780 [2024-07-14 21:31:47.167416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:35.780 [2024-07-14 21:31:47.167425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:35.780 [2024-07-14 21:31:47.167436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.167446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:35.780 [2024-07-14 21:31:47.167456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:32:35.780 [2024-07-14 21:31:47.167466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.199226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.199286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:35.780 [2024-07-14 21:31:47.199319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.709 ms 00:32:35.780 [2024-07-14 21:31:47.199330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.199425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.199440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:35.780 [2024-07-14 21:31:47.199451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:35.780 [2024-07-14 21:31:47.199460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.230559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.230619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:35.780 [2024-07-14 21:31:47.230651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.995 ms 00:32:35.780 [2024-07-14 21:31:47.230661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.230711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.230726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:35.780 [2024-07-14 21:31:47.230742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:35.780 [2024-07-14 21:31:47.230752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.230927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.230944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:35.780 [2024-07-14 21:31:47.230956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:32:35.780 [2024-07-14 21:31:47.230966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.231104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.231121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:35.780 [2024-07-14 21:31:47.231132] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:32:35.780 [2024-07-14 21:31:47.231145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.244783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.244843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:35.780 [2024-07-14 21:31:47.244863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.613 ms 00:32:35.780 [2024-07-14 21:31:47.244873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.245030] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:35.780 [2024-07-14 21:31:47.245067] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:35.780 [2024-07-14 21:31:47.245087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.245113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:35.780 [2024-07-14 21:31:47.245124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:32:35.780 [2024-07-14 21:31:47.245134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.256237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.256281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:35.780 [2024-07-14 21:31:47.256310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.080 ms 00:32:35.780 [2024-07-14 21:31:47.256319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.256421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.256435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:35.780 [2024-07-14 21:31:47.256445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:32:35.780 [2024-07-14 21:31:47.256454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.256519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.256550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:35.780 [2024-07-14 21:31:47.256566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:35.780 [2024-07-14 21:31:47.256575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.257283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.780 [2024-07-14 21:31:47.257310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:35.780 [2024-07-14 21:31:47.257322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:32:35.780 [2024-07-14 21:31:47.257332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.780 [2024-07-14 21:31:47.257357] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:35.780 [2024-07-14 21:31:47.257371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.781 [2024-07-14 21:31:47.257381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:35.781 [2024-07-14 21:31:47.257405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:35.781 [2024-07-14 21:31:47.257415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.781 [2024-07-14 21:31:47.268459] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:35.781 [2024-07-14 21:31:47.268722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.781 [2024-07-14 21:31:47.268741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:35.781 [2024-07-14 21:31:47.268754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.285 ms 00:32:35.781 [2024-07-14 21:31:47.268779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.781 [2024-07-14 21:31:47.270845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.781 [2024-07-14 21:31:47.270894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:35.781 [2024-07-14 21:31:47.270923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:32:35.781 [2024-07-14 21:31:47.270938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.781 [2024-07-14 21:31:47.271010] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:35.781 [2024-07-14 21:31:47.271491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.781 [2024-07-14 21:31:47.271521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:35.781 [2024-07-14 21:31:47.271534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:32:35.781 [2024-07-14 21:31:47.271544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.781 [2024-07-14 21:31:47.271575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.781 [2024-07-14 21:31:47.271590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:35.781 [2024-07-14 21:31:47.271601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:35.781 [2024-07-14 21:31:47.271615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.781 [2024-07-14 21:31:47.271648] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:35.781 [2024-07-14 21:31:47.271663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.781 [2024-07-14 21:31:47.271673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:35.781 [2024-07-14 21:31:47.271683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:35.781 [2024-07-14 21:31:47.271692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.781 [2024-07-14 21:31:47.298504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.781 [2024-07-14 21:31:47.298558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:35.781 [2024-07-14 21:31:47.298595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.791 ms 00:32:35.781 [2024-07-14 21:31:47.298605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.781 [2024-07-14 21:31:47.298675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.781 [2024-07-14 21:31:47.298692] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:35.781 [2024-07-14 21:31:47.298702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:35.781 [2024-07-14 21:31:47.298711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.781 [2024-07-14 21:31:47.309007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 146.629 ms, result 0 00:33:20.677 Copying: 1024/1024 [MB] (average 23 MBps) [2024-07-14 21:32:32.110267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.677 [2024-07-14 21:32:32.110357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:20.677 [2024-07-14 21:32:32.110415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:20.677 [2024-07-14 21:32:32.110426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.677 [2024-07-14 21:32:32.110457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:20.677 [2024-07-14 21:32:32.113337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.677 [2024-07-14 21:32:32.113367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:20.677 [2024-07-14 21:32:32.113396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.860 ms 00:33:20.677 [2024-07-14 21:32:32.113406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.677 [2024-07-14 21:32:32.113630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.677 [2024-07-14 21:32:32.113646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:20.677 [2024-07-14 21:32:32.113664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:33:20.677 [2024-07-14 21:32:32.113674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.677 [2024-07-14 21:32:32.113705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:33:20.677 [2024-07-14 21:32:32.113718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:20.677 [2024-07-14 21:32:32.113728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:20.677 [2024-07-14 21:32:32.113738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.677 [2024-07-14 21:32:32.113791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.677 [2024-07-14 21:32:32.113805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:20.677 [2024-07-14 21:32:32.113815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:20.677 [2024-07-14 21:32:32.114091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.677 [2024-07-14 21:32:32.114166] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:20.677 [2024-07-14 21:32:32.114216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:33:20.677 [2024-07-14 21:32:32.114346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.114410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.114462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.114582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.114639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.114690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.114834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.114888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.115069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.115192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:20.677 [2024-07-14 21:32:32.115387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.115444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.115568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.115679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.115734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.115912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.115969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 
21:32:32.116019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.116785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.117043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.117102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.118100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.118293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.118419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.118652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.118712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:33:20.678 [2024-07-14 21:32:32.119400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.119997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.120007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.120017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:20.678 [2024-07-14 21:32:32.120035] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:20.678 [2024-07-14 21:32:32.120046] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0deb0b22-1569-42dd-9079-fdc56cfdd0ab 00:33:20.678 [2024-07-14 21:32:32.120056] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:33:20.678 [2024-07-14 21:32:32.120065] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:33:20.678 [2024-07-14 21:32:32.120074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:33:20.678 [2024-07-14 21:32:32.120085] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:33:20.678 [2024-07-14 21:32:32.120095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:20.678 [2024-07-14 21:32:32.120105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:20.679 [2024-07-14 21:32:32.120115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:20.679 [2024-07-14 21:32:32.120124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:20.679 [2024-07-14 21:32:32.120133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:20.679 [2024-07-14 21:32:32.120143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.679 [2024-07-14 21:32:32.120158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:20.679 [2024-07-14 21:32:32.120169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.979 ms 00:33:20.679 [2024-07-14 21:32:32.120179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.679 [2024-07-14 21:32:32.134058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.679 [2024-07-14 21:32:32.134261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:20.679 [2024-07-14 21:32:32.134400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.853 ms 00:33:20.679 [2024-07-14 21:32:32.134512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.679 [2024-07-14 21:32:32.135016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.679 [2024-07-14 21:32:32.135170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:20.679 [2024-07-14 21:32:32.135322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.414 ms 00:33:20.679 [2024-07-14 21:32:32.135367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.679 [2024-07-14 21:32:32.164321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.679 [2024-07-14 21:32:32.164483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:20.679 [2024-07-14 21:32:32.164588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.679 [2024-07-14 21:32:32.164686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.679 [2024-07-14 21:32:32.164769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.679 [2024-07-14 21:32:32.164951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:20.679 [2024-07-14 21:32:32.165001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.679 [2024-07-14 21:32:32.165100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.679 [2024-07-14 21:32:32.165216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.679 [2024-07-14 21:32:32.165290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:20.679 [2024-07-14 21:32:32.165441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.679 [2024-07-14 21:32:32.165487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.679 [2024-07-14 21:32:32.165544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.679 [2024-07-14 21:32:32.165687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:20.679 [2024-07-14 21:32:32.165735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.679 [2024-07-14 21:32:32.165768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.241625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.937 [2024-07-14 21:32:32.241910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:20.937 [2024-07-14 21:32:32.242036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.937 [2024-07-14 21:32:32.242092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.306499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.937 [2024-07-14 21:32:32.306698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:20.937 [2024-07-14 21:32:32.306833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.937 [2024-07-14 21:32:32.306885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.306982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.937 [2024-07-14 21:32:32.307079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:20.937 [2024-07-14 21:32:32.307130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.937 [2024-07-14 21:32:32.307162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.307224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.937 [2024-07-14 21:32:32.307246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:20.937 [2024-07-14 
21:32:32.307257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.937 [2024-07-14 21:32:32.307267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.307357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.937 [2024-07-14 21:32:32.307375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:20.937 [2024-07-14 21:32:32.307387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.937 [2024-07-14 21:32:32.307396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.307430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.937 [2024-07-14 21:32:32.307445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:20.937 [2024-07-14 21:32:32.307460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.937 [2024-07-14 21:32:32.307470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.307508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.937 [2024-07-14 21:32:32.307522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:20.937 [2024-07-14 21:32:32.307532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.937 [2024-07-14 21:32:32.307541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.307588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:20.937 [2024-07-14 21:32:32.307608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:20.937 [2024-07-14 21:32:32.307618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:20.937 [2024-07-14 21:32:32.307628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.937 [2024-07-14 21:32:32.307791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 197.467 ms, result 0 00:33:21.867 00:33:21.867 00:33:21.867 21:32:33 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:23.764 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:23.764 21:32:34 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:23.764 21:32:34 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:23.764 21:32:34 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 86464 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 86464 ']' 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 86464 00:33:23.764 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (86464) - No such process 00:33:23.764 Process with pid 86464 is not found 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 86464 is not found' 00:33:23.764 21:32:35 
ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:23.764 Remove shared memory files 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_band_md /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_l2p_l1 /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_l2p_l2 /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_l2p_l2_ctx /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_nvc_md /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_p2l_pool /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_sb /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_sb_shm /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_trim_bitmap /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_trim_log /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_trim_md /dev/hugepages/ftl_0deb0b22-1569-42dd-9079-fdc56cfdd0ab_vmap 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:23.764 00:33:23.764 real 3m30.364s 00:33:23.764 user 3m17.489s 00:33:23.764 sys 0m14.487s 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:23.764 21:32:35 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:23.764 ************************************ 00:33:23.764 END TEST ftl_restore_fast 00:33:23.764 ************************************ 00:33:23.764 21:32:35 ftl -- common/autotest_common.sh@1142 -- # return 0 00:33:23.764 21:32:35 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:23.764 21:32:35 ftl -- ftl/ftl.sh@14 -- # killprocess 78630 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@948 -- # '[' -z 78630 ']' 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@952 -- # kill -0 78630 00:33:23.765 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (78630) - No such process 00:33:23.765 Process with pid 78630 is not found 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 78630 is not found' 00:33:23.765 21:32:35 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:23.765 21:32:35 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88536 00:33:23.765 21:32:35 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:23.765 21:32:35 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88536 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@829 -- # '[' -z 88536 ']' 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:23.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:23.765 21:32:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:23.765 [2024-07-14 21:32:35.205779] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:33:23.765 [2024-07-14 21:32:35.206634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88536 ] 00:33:24.023 [2024-07-14 21:32:35.380464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.281 [2024-07-14 21:32:35.582044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.847 21:32:36 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:24.847 21:32:36 ftl -- common/autotest_common.sh@862 -- # return 0 00:33:24.847 21:32:36 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:25.105 nvme0n1 00:33:25.105 21:32:36 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:25.105 21:32:36 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:25.105 21:32:36 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:25.362 21:32:36 ftl -- ftl/common.sh@28 -- # stores=49cf1a6a-a117-46fd-b79d-9ee9d265f514 00:33:25.362 21:32:36 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:25.362 21:32:36 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49cf1a6a-a117-46fd-b79d-9ee9d265f514 00:33:25.644 21:32:36 ftl -- ftl/ftl.sh@23 -- # killprocess 88536 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@948 -- # '[' -z 88536 ']' 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@952 -- # kill -0 88536 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@953 -- # uname 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88536 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:25.644 killing process with pid 88536 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88536' 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@967 -- # kill 88536 00:33:25.644 21:32:36 ftl -- common/autotest_common.sh@972 -- # wait 88536 00:33:27.547 21:32:38 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:27.547 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:27.547 Waiting for block devices as requested 00:33:27.547 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:27.547 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:27.547 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:27.805 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:33.140 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:33.140 21:32:44 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:33.140 Remove shared memory files 00:33:33.140 21:32:44 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:33.140 21:32:44 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:33.140 21:32:44 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:33.140 21:32:44 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:33.140 21:32:44 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:33.140 21:32:44 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:33.140 
************************************ 00:33:33.140 END TEST ftl 00:33:33.140 ************************************ 00:33:33.140 00:33:33.140 real 15m18.009s 00:33:33.140 user 18m2.147s 00:33:33.140 sys 1m40.595s 00:33:33.140 21:32:44 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:33.140 21:32:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:33.140 21:32:44 -- common/autotest_common.sh@1142 -- # return 0 00:33:33.140 21:32:44 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:33.141 21:32:44 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:33.141 21:32:44 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:33.141 21:32:44 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:33.141 21:32:44 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:33.141 21:32:44 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:33.141 21:32:44 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:33.141 21:32:44 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:33.141 21:32:44 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:33.141 21:32:44 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:33.141 21:32:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:33.141 21:32:44 -- common/autotest_common.sh@10 -- # set +x 00:33:33.141 21:32:44 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:33.141 21:32:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:33.141 21:32:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:33.141 21:32:44 -- common/autotest_common.sh@10 -- # set +x 00:33:34.516 INFO: APP EXITING 00:33:34.516 INFO: killing all VMs 00:33:34.516 INFO: killing vhost app 00:33:34.516 INFO: EXIT DONE 00:33:34.775 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:35.034 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:35.034 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:35.034 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:35.034 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:35.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:35.861 Cleaning 00:33:35.861 Removing: /var/run/dpdk/spdk0/config 00:33:35.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:35.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:35.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:35.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:35.861 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:35.861 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:35.861 Removing: /var/run/dpdk/spdk0 00:33:35.861 Removing: /var/run/dpdk/spdk_pid61741 00:33:35.861 Removing: /var/run/dpdk/spdk_pid61946 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62162 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62260 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62305 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62439 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62457 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62632 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62723 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62817 00:33:35.861 Removing: /var/run/dpdk/spdk_pid62920 00:33:35.861 Removing: /var/run/dpdk/spdk_pid63009 00:33:35.861 Removing: /var/run/dpdk/spdk_pid63054 00:33:35.861 Removing: /var/run/dpdk/spdk_pid63095 00:33:35.861 Removing: /var/run/dpdk/spdk_pid63153 00:33:35.861 Removing: /var/run/dpdk/spdk_pid63270 00:33:35.861 Removing: 
/var/run/dpdk/spdk_pid63717 00:33:35.861 Removing: /var/run/dpdk/spdk_pid63781 00:33:35.861 Removing: /var/run/dpdk/spdk_pid63857 00:33:35.862 Removing: /var/run/dpdk/spdk_pid63873 00:33:35.862 Removing: /var/run/dpdk/spdk_pid63987 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64003 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64127 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64143 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64202 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64220 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64284 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64302 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64466 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64508 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64589 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64659 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64690 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64763 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64809 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64850 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64891 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64938 00:33:35.862 Removing: /var/run/dpdk/spdk_pid64979 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65024 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65072 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65113 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65154 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65195 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65243 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65288 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65329 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65370 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65417 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65458 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65506 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65557 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65598 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65640 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65722 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65833 00:33:36.122 Removing: /var/run/dpdk/spdk_pid65994 00:33:36.122 Removing: /var/run/dpdk/spdk_pid66084 00:33:36.122 Removing: /var/run/dpdk/spdk_pid66126 00:33:36.122 Removing: /var/run/dpdk/spdk_pid66581 00:33:36.122 Removing: /var/run/dpdk/spdk_pid66674 00:33:36.122 Removing: /var/run/dpdk/spdk_pid66784 00:33:36.122 Removing: /var/run/dpdk/spdk_pid66847 00:33:36.122 Removing: /var/run/dpdk/spdk_pid66873 00:33:36.122 Removing: /var/run/dpdk/spdk_pid66949 00:33:36.122 Removing: /var/run/dpdk/spdk_pid67577 00:33:36.122 Removing: /var/run/dpdk/spdk_pid67619 00:33:36.122 Removing: /var/run/dpdk/spdk_pid68118 00:33:36.122 Removing: /var/run/dpdk/spdk_pid68216 00:33:36.122 Removing: /var/run/dpdk/spdk_pid68332 00:33:36.122 Removing: /var/run/dpdk/spdk_pid68385 00:33:36.122 Removing: /var/run/dpdk/spdk_pid68416 00:33:36.122 Removing: /var/run/dpdk/spdk_pid68442 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70293 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70430 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70441 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70457 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70496 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70500 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70512 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70558 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70562 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70574 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70621 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70625 00:33:36.122 Removing: /var/run/dpdk/spdk_pid70637 
00:33:36.122 Removing: /var/run/dpdk/spdk_pid71993
00:33:36.122 Removing: /var/run/dpdk/spdk_pid72089
00:33:36.122 Removing: /var/run/dpdk/spdk_pid73522
00:33:36.122 Removing: /var/run/dpdk/spdk_pid74871
00:33:36.122 Removing: /var/run/dpdk/spdk_pid74986
00:33:36.122 Removing: /var/run/dpdk/spdk_pid75101
00:33:36.122 Removing: /var/run/dpdk/spdk_pid75205
00:33:36.122 Removing: /var/run/dpdk/spdk_pid75343
00:33:36.122 Removing: /var/run/dpdk/spdk_pid75417
00:33:36.122 Removing: /var/run/dpdk/spdk_pid75557
00:33:36.122 Removing: /var/run/dpdk/spdk_pid75922
00:33:36.122 Removing: /var/run/dpdk/spdk_pid75962
00:33:36.122 Removing: /var/run/dpdk/spdk_pid76423
00:33:36.122 Removing: /var/run/dpdk/spdk_pid76618
00:33:36.122 Removing: /var/run/dpdk/spdk_pid76717
00:33:36.122 Removing: /var/run/dpdk/spdk_pid76827
00:33:36.122 Removing: /var/run/dpdk/spdk_pid76881
00:33:36.122 Removing: /var/run/dpdk/spdk_pid76908
00:33:36.122 Removing: /var/run/dpdk/spdk_pid77198
00:33:36.122 Removing: /var/run/dpdk/spdk_pid77253
00:33:36.122 Removing: /var/run/dpdk/spdk_pid77326
00:33:36.122 Removing: /var/run/dpdk/spdk_pid77708
00:33:36.122 Removing: /var/run/dpdk/spdk_pid77849
00:33:36.122 Removing: /var/run/dpdk/spdk_pid78630
00:33:36.122 Removing: /var/run/dpdk/spdk_pid78760
00:33:36.122 Removing: /var/run/dpdk/spdk_pid78942
00:33:36.122 Removing: /var/run/dpdk/spdk_pid79045
00:33:36.122 Removing: /var/run/dpdk/spdk_pid79404
00:33:36.122 Removing: /var/run/dpdk/spdk_pid79673
00:33:36.122 Removing: /var/run/dpdk/spdk_pid80025
00:33:36.122 Removing: /var/run/dpdk/spdk_pid80220
00:33:36.122 Removing: /var/run/dpdk/spdk_pid80361
00:33:36.122 Removing: /var/run/dpdk/spdk_pid80421
00:33:36.122 Removing: /var/run/dpdk/spdk_pid80559
00:33:36.122 Removing: /var/run/dpdk/spdk_pid80594
00:33:36.122 Removing: /var/run/dpdk/spdk_pid80648
00:33:36.122 Removing: /var/run/dpdk/spdk_pid80850
00:33:36.122 Removing: /var/run/dpdk/spdk_pid81081
00:33:36.382 Removing: /var/run/dpdk/spdk_pid81521
00:33:36.382 Removing: /var/run/dpdk/spdk_pid81986
00:33:36.382 Removing: /var/run/dpdk/spdk_pid82443
00:33:36.382 Removing: /var/run/dpdk/spdk_pid82982
00:33:36.382 Removing: /var/run/dpdk/spdk_pid83119
00:33:36.382 Removing: /var/run/dpdk/spdk_pid83212
00:33:36.382 Removing: /var/run/dpdk/spdk_pid83914
00:33:36.382 Removing: /var/run/dpdk/spdk_pid83986
00:33:36.382 Removing: /var/run/dpdk/spdk_pid84457
00:33:36.382 Removing: /var/run/dpdk/spdk_pid84888
00:33:36.382 Removing: /var/run/dpdk/spdk_pid85399
00:33:36.382 Removing: /var/run/dpdk/spdk_pid85510
00:33:36.382 Removing: /var/run/dpdk/spdk_pid85559
00:33:36.382 Removing: /var/run/dpdk/spdk_pid85633
00:33:36.382 Removing: /var/run/dpdk/spdk_pid85697
00:33:36.382 Removing: /var/run/dpdk/spdk_pid85771
00:33:36.382 Removing: /var/run/dpdk/spdk_pid85985
00:33:36.382 Removing: /var/run/dpdk/spdk_pid86054
00:33:36.382 Removing: /var/run/dpdk/spdk_pid86127
00:33:36.382 Removing: /var/run/dpdk/spdk_pid86198
00:33:36.382 Removing: /var/run/dpdk/spdk_pid86234
00:33:36.382 Removing: /var/run/dpdk/spdk_pid86307
00:33:36.382 Removing: /var/run/dpdk/spdk_pid86464
00:33:36.382 Removing: /var/run/dpdk/spdk_pid86680
00:33:36.382 Removing: /var/run/dpdk/spdk_pid87124
00:33:36.382 Removing: /var/run/dpdk/spdk_pid87581
00:33:36.383 Removing: /var/run/dpdk/spdk_pid88049
00:33:36.383 Removing: /var/run/dpdk/spdk_pid88536
00:33:36.383 Clean
00:33:36.383 21:32:47 -- common/autotest_common.sh@1451 -- # return 0
00:33:36.383 21:32:47 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:33:36.383 21:32:47 -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:36.383 21:32:47 -- common/autotest_common.sh@10 -- # set +x
00:33:36.383 21:32:47 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:33:36.383 21:32:47 -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:36.383 21:32:47 -- common/autotest_common.sh@10 -- # set +x
00:33:36.383 21:32:47 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:36.383 21:32:47 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:36.383 21:32:47 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:36.383 21:32:47 -- spdk/autotest.sh@391 -- # hash lcov
00:33:36.383 21:32:47 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:36.383 21:32:47 -- spdk/autotest.sh@393 -- # hostname
00:33:36.383 21:32:47 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:36.642 geninfo: WARNING: invalid characters removed from testname!
00:33:58.561 21:33:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:01.096 21:33:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:03.626 21:33:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:05.530 21:33:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:08.062 21:33:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:09.963 21:33:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:12.494 21:33:23 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:12.494 21:33:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:12.495 21:33:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:12.495 21:33:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:12.495 21:33:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:12.495 21:33:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:12.495 21:33:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:12.495 21:33:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:12.495 21:33:23 -- paths/export.sh@5 -- $ export PATH
00:34:12.495 21:33:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:12.495 21:33:23 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:34:12.495 21:33:23 -- common/autobuild_common.sh@444 -- $ date +%s
00:34:12.495 21:33:23 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720992803.XXXXXX
00:34:12.495 21:33:23 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720992803.32dYIK
00:34:12.495 21:33:23 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:34:12.495 21:33:23 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:34:12.495 21:33:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:34:12.495 21:33:23 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:34:12.495 21:33:23 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:34:12.495 21:33:23 -- common/autobuild_common.sh@460 -- $ get_config_params
00:34:12.495 21:33:23 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:34:12.495 21:33:23 -- common/autotest_common.sh@10 -- $ set +x
00:34:12.495 21:33:23 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:34:12.495 21:33:23 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:34:12.495 21:33:23 -- pm/common@17 -- $ local monitor
00:34:12.495 21:33:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:12.495 21:33:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:12.495 21:33:23 -- pm/common@25 -- $ sleep 1
00:34:12.495 21:33:23 -- pm/common@21 -- $ date +%s
00:34:12.495 21:33:23 -- pm/common@21 -- $ date +%s
00:34:12.495 21:33:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720992803
00:34:12.495 21:33:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720992803
00:34:12.495 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720992803_collect-vmstat.pm.log
00:34:12.495 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720992803_collect-cpu-load.pm.log
00:34:13.431 21:33:24 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:34:13.431 21:33:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:34:13.431 21:33:24 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:34:13.431 21:33:24 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:34:13.431 21:33:24 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:34:13.431 21:33:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:34:13.431 21:33:24 -- spdk/autopackage.sh@19 -- $ timing_finish
00:34:13.431 21:33:24 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:13.431 21:33:24 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:34:13.431 21:33:24 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:13.691 21:33:25 -- spdk/autopackage.sh@20 -- $ exit 0
00:34:13.691 21:33:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:13.691 21:33:25 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:13.691 21:33:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:13.691 21:33:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:13.691 21:33:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:34:13.691 21:33:25 -- pm/common@44 -- $ pid=90227
00:34:13.691 21:33:25 -- pm/common@50 -- $ kill -TERM 90227
00:34:13.691 21:33:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:13.691 21:33:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:34:13.691 21:33:25 -- pm/common@44 -- $ pid=90228
00:34:13.691 21:33:25 -- pm/common@50 -- $ kill -TERM 90228
+ [[ -n 5191 ]]
+ sudo kill 5191
00:34:13.701 [Pipeline] }
00:34:13.720 [Pipeline] // timeout
00:34:13.725 [Pipeline] }
00:34:13.742 [Pipeline] // stage
00:34:13.748 [Pipeline] }
00:34:13.765 [Pipeline] // catchError
00:34:13.774 [Pipeline] stage
00:34:13.776 [Pipeline] { (Stop VM)
00:34:13.790 [Pipeline] sh
00:34:14.071 + vagrant halt
00:34:17.358 ==> default: Halting domain...
00:34:23.933 [Pipeline] sh
00:34:24.212 + vagrant destroy -f
00:34:26.790 ==> default: Removing domain...
00:34:27.062 [Pipeline] sh
00:34:27.342 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:27.354 [Pipeline] }
00:34:27.378 [Pipeline] // stage
00:34:27.384 [Pipeline] }
00:34:27.406 [Pipeline] // dir
00:34:27.412 [Pipeline] }
00:34:27.432 [Pipeline] // wrap
00:34:27.438 [Pipeline] }
00:34:27.451 [Pipeline] // catchError
00:34:27.462 [Pipeline] stage
00:34:27.465 [Pipeline] { (Epilogue)
00:34:27.481 [Pipeline] sh
00:34:27.762 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:33.039 [Pipeline] catchError
00:34:33.041 [Pipeline] {
00:34:33.054 [Pipeline] sh
00:34:33.333 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:33.591 Artifacts sizes are good
00:34:33.599 [Pipeline] }
00:34:33.614 [Pipeline] // catchError
00:34:33.623 [Pipeline] archiveArtifacts
00:34:33.630 Archiving artifacts
00:34:33.774 [Pipeline] cleanWs
00:34:33.785 [WS-CLEANUP] Deleting project workspace...
00:34:33.785 [WS-CLEANUP] Deferred wipeout is used...
00:34:33.791 [WS-CLEANUP] done
00:34:33.792 [Pipeline] }
00:34:33.807 [Pipeline] // stage
00:34:33.812 [Pipeline] }
00:34:33.825 [Pipeline] // node
00:34:33.830 [Pipeline] End of Pipeline
00:34:33.955 Finished: SUCCESS